ElMaestro
★★★

Denmark,
2009-10-03 19:05

(edited by ElMaestro on 2009-10-03 20:52)
Posting: # 4291

non-central t, power, R: [R for BE/BA]

Dear all,

In a previous thread I asked what the power in a 2,2,2-BE trial is at N=6, CV=65% and T/R=95%.
The thing is, FARTSSIE comes up with a negative reply. HS's implementation in R of the equations given by Julious (Statistics in Medicine, 2004, vol. 23, 1921-1986) also evaluates to a negative figure.
A central role here is played by the non-central t distribution. My guess was that we were torturing the algorithm, and I think this was HS's point as well (quote "I think we are pushing software to the limits, i.e., running into troubles of getting a reasonable value of the noncentral t-distribution (...)").

This I wanted to look into a little, using R as my reference. Wikipedia tells us what the distribution's density function looks like. I made an independent implementation of this expression, plus an implementation of the density integrated from below up to t, which is basically the same as R's pt function (based on ASA243 according to the documentation).

The values from pt can easily be reproduced, it turns out. But I cannot "improve" on them, although I have all the time in the world to integrate with ridiculously small stepsizes and tweak and fiddle. As it turns out, in the concrete example we are asking the noncentral t distribution for probabilities in a region of t-values around ±2.13 with noncentrality parameters 0.50 and -0.80. These are not at all extreme values. I think it sounds reasonable that "prob1" and "prob2" (using HS's terminology) are around 0.89 and 0.17 [try and plot the densities as a function of t, and eyeball the relative amount of curve left of the indicated values].
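
As a sanity check (my own sketch, not the original code; df = 4 corresponds to N=6, and the noncentrality parameters 0.50 and -0.80 are the ones quoted above), pt can be reproduced by brute-force integration of R's own noncentral density dt:

```r
# Integrate the noncentral t density (df = 4) from -Inf up to +/-t and
# compare with pt(), which is based on ASA 243.
t1  <- qt(0.95, 4)                        # ~2.1318, as in the example
nc1 <- 0.50
nc2 <- -0.80
prob1 <- integrate(function(x) dt(x, 4, ncp = nc1), -Inf,  t1)$value
prob2 <- integrate(function(x) dt(x, 4, ncp = nc2), -Inf, -t1)$value
c(prob1, prob2)                           # ~0.89 and ~0.17, as eyeballed above
c(prob1 - pt(t1, 4, nc1),                 # agreement with ASA 243
  prob2 - pt(-t1, 4, nc2))
```

At least in this region, pt and brute-force integration agree to far more decimals than the problem would require.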

For this reason I am becoming a sceptic now. Either I am not able to make a better implementation than ASA243, or ...please forgive me... the equations given by Julious (and others) are not exact in certain situations (for example: I only see the issue when t2 < nc2), or something else entirely is in play.

Can anyone shed light on this?

Let me add: if the problem lies in ASA243, then I'd love to write one that gets it right, of course.

Many thanks for any input and best regards,
EM.
Helmut
★★★

Vienna, Austria,
2009-10-03 21:45

@ ElMaestro
Posting: # 4292

non-central t, power, R:

Hi ElMaestro,

very interesting question! I'm too short of time to read Owen's paper again (and likely too stupid to understand it as well). nQuery Advisor 7 (ASA184, I guess) comes up with
The computed power was too low for the approximation to be considered reliable. Use a larger sample size.
The lowest sample size without an error message was 40 (power 5.871678%, more than 10 times the value R comes up with)...

Have you considered contacting Russell Lenth (author of ASA243 and the VBA-routines in FARTSSIE)? He writes:

Accuracy

• Formula accuracy
Most computations are "exact" in the sense that they are based on exact formulas for sample size, power, etc.
• Machine accuracy
Even with exact formulas, computed values are inexact, as are all double-precision floating-point computations. Many computations (especially noncentral distributions) require summing one or more series, and there is a serious tradeoff between speed and accuracy. The error bound set for cdfs is 1E-8 or smaller, and for quantiles the bound is 1E-6. Actual errors can be much larger due to accumulated errors or other reasons. Quantiles, for example, are computed by numerically solving an equation involving the cdf; thus, in extreme cases, a small error in the cdf can create a large error in the quantile.
[…] in the case of quantile computations, no warning message is generated for extreme quantiles. If you want a power of .9999 at alpha=.0001, you can expect the computed sample size to not be accurate to the nearest integer! If you specify reasonable criteria, the answers will be pretty reliable.

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2009-10-04 21:51

@ Helmut
Posting: # 4293

non-central t, power, R:

Dear HS,

» Have you considered contacting Russell Lenth (author of ASA243 and the VBA-routines in FARTSSIE)?

Might be the way to go. One could hope that the author of FARTSSIE reads these discussions and looks into it. It is of course true that, depending on how we look at this, there is a lot of numerical integration and/or summation going on, and a lot of 'small' errors would indeed sooner or later add up to a 'large' error. To be honest, I am just a little sceptical here, because we are not really involving extreme values of t or of the noncentrality parameters, so at this point I am not fully convinced that this is just a numerical precision issue.
Must read some more papers, I guess. Or implement some C library which works with 117 decimals or something obscene, and try anew.

If anyone has any insight into how Al Gore Rhythms can be validated when the true results at the test conditions are not known, then I'd be happy to learn it. Or even better, how to determine under which conditions ASA243 is accurate to 3 decimals or something like that.

EM.
Helmut
★★★

Vienna, Austria,
2009-10-04 23:58

@ ElMaestro
Posting: # 4294

non-central t, power, R:

Hi ElMaestro!

» Might be the way to go. One could hope that the author of Fartssie reads these discussions and looks into it.

Hmm, David last logged in in mid-March. But he has activated his e-mail address in the forum, and I have had some nice conversations with him in the past.

» […] To be honest I am just a little sceptical here, because we are not really involving extreme values of t or the noncentrality parameters, so I am not at this point fully convinced that this is no more than just a numerical precision issue.
» Must read some more papers, I guess. Or implement some C library which works with 117 decimals or something obscene and try anew.

Sorry I'm of no big help now (still a lot of slides to prepare for a two-day workshop next week), but I think the algo uses a Taylor expansion which is truncated after the first (?) term. One thing coming to my mind is the possibility that, when we run into numerical issues, they may be resolved by going 'deeper' into the Taylor series.

» If anyone has any insight [...]

Sancta simplicitas, non!

d_labes
★★★

Berlin, Germany,
2009-10-05 12:58

@ ElMaestro
Posting: # 4296

non-central t is not the question

Dear ElMaestro,

don't waste your time in attempting to improve Al Gore's rhythms.
Their inventors had already made every effort regarding the accuracy and reliability of the non-central t distribution, I think.

Your time is better invested in NLYW (blonde downstairs) or in the Seven Seas.

The formula given in Julious is definitely an approximation, albeit a good one for the usual power range ≥70%, as sample size calculations with that formula show.

The exact algo for the power of the equivalence test (two one-sided tests, aka TOST) in a 2x2 cross-over is ([1],[2]):

Power = Prob(t1 ≥ t_(1-alpha, n-2) and t2 ≤ -t_(1-alpha, n-2) | bioequivalence)

The t1 and t2 values are the t-variates of the two one-sided t-tests of bioequivalence.

According to Owen, (t1, t2) follow a bivariate non-central t-distribution, and the power can be calculated as the difference of two definite integrals (Owen's Q-functions, which depend on df = degrees of freedom and take four arguments):

Power = Q_df(-t_(1-alpha, n-2), d2, 0, R) - Q_df(t_(1-alpha, n-2), d1, 0, R)

See [1] and [2] for that and for the formulas for d1, d2 and R.

So you do not need a better algo for the non-central t, but rather an algo for Owen's Q-functions.
My SAS beast has these functions implemented (but not documented!).
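
For those without the SAS beast, here is a rough R sketch of the same idea (the function names are mine, and sigma = sqrt(log(1+CV^2)) assumes a lognormal model; Owen's Q with lower integration limit 0 reduces to a single definite integral that R's integrate() can handle - see [1] and [2] for the exact definitions):

```r
# Sketch (not the SAS original): exact power of the TOST in a 2x2 cross-over
# via Owen's Q-functions, cf. [1], [2].
OwensQ <- function(nu, t, delta, b) {
  # Q_nu(t, delta; 0, b) = sqrt(2*pi)/(Gamma(nu/2)*2^((nu-2)/2)) *
  #                        int_0^b pnorm(t*x/sqrt(nu) - delta) x^(nu-1) dnorm(x) dx
  lconst <- -lgamma(nu/2) - (nu/2 - 1)*log(2)   # lgamma() avoids overflow
  integrand <- function(x)
    pnorm(t*x/sqrt(nu) - delta) * exp(lconst + (nu - 1)*log(x) + dnorm(x, log = TRUE))
  sqrt(2*pi) * integrate(integrand, 0, b)$value
}
power.TOST <- function(alpha, theta1, theta2, ratio, CV, n) {
  nu   <- n - 2
  tval <- qt(1 - alpha, nu)
  se   <- sqrt(log(1 + CV^2)) * sqrt(2/n)       # SE of the difference of log-means
  d1   <- (log(ratio) - log(theta1))/se
  d2   <- (log(ratio) - log(theta2))/se
  R    <- sqrt(nu) * (d1 - d2)/(2*tval)
  OwensQ(nu, -tval, d2, R) - OwensQ(nu, tval, d1, R)
}
power.TOST(0.05, 0.8, 1.25, 0.95, 0.65, 38)  # ~0.045, no negative artefact
```

Unlike the approximation via two noncentral t probabilities, this never goes negative for small n.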

» In a previous thread I asked what the power in a 2,2,2-BE trial is at N=6, CV=65% and T/R=95%.

And my sasophylistic answer to your ElMaestrophylistic question "what is the power in a 2,2,2-BE trial at N=38, CV=65% and T/R=95%?" given here, power = 4.5%, is based on these Q-functions.
Implementing the Julious formulas (according to Helmut's R-code) in "The power to know" results in power = -0.027799235, the same as FARTSSIE!

[1] Diletti, E., Hauschke, D., and Steinijans, V.W. (1991),
"Sample Size Determination for Bioequivalence Assessment by Means of Confidence Intervals"
International Journal of Clinical Pharmacology, Therapy and Toxicology, Vol. 29, 1-8.
[2] Hauschke, Steinijans, Pigeot
"Bioequivalence studies in drug development"
Wiley 2007, pages 120-121

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-05 14:53

@ d_labes
Posting: # 4297

SAS -> R -> SAS

Dear D Labes,

» Implementing the Julious formulas (according to Helmut's R-code) in "The power to know" results in power=-0.027799235, same as Fartssie!

Funny!
My R-code was just ported from Patterson's/Jones' SAS-code. Tweaking the code to calculate power (haha) at n=38:

options(digits=8)
alpha  <- 0.05; CV <- 0.65; ratio <- 0.95; n <- 38
Theta1 <- 0.80; Theta2 <- 1/Theta1
s      <- sqrt(2)*sqrt(log(1+CV^2))
t1     <- qt(1-alpha,n-2)
nc1    <- sqrt(n)*(log(ratio)-log(Theta1))/s
nc2    <- sqrt(n)*(log(ratio)-log(Theta2))/s
prob1  <- pt(+t1,n-2,nc1)
prob2  <- pt(-t1,n-2,nc2)
power  <- prob2-prob1
power

I get -0.027799235, which is either an Ouroboros or another strong hint that SAS and R follow the same algorithm.

d_labes
★★★

Berlin, Germany,
2009-10-06 14:31

@ Helmut
Posting: # 4301

Lost in translation

Dear Helmut,

» [...] which is either an Ouroboros or [...]

It is always a pleasure for me to learn from you. Thanks for that nice dragon, which I was not aware of until now.

Seems to me like an allegory of some/many things I/we do in my/our daily work.
(delete as applicable)

Regarding the translation SAS->R->SAS, I must note that we are very lucky to get the same results after that cycle.
Others are not so lucky, if I think of e.g. English->German->English translations.
Still, such back-and-forth translations are regarded as a QA measure when translating study documents, e.g. the volunteers' informed consent, which must be in the language of the country the study is performed in.

To try out what happens with automatic translations, visit Lost in translation.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-06 14:42

@ d_labes
Posting: # 4302

Lost in translation

Dear D Labes!

» » […] an Ouroboros
»
» Thanks for that nice dragon I'm not aware of until now.

As a trained chemist I heard of Kekulé's daydream, which led him to the discovery of the unsaturated ring structure of benzene. That's why I know this beast.

» Seems to me as an allegory of some/many things I/we do in my/our daily work.
» (delete as applicable)
Nothing to delete.

d_labes
★★★

Berlin, Germany,
2009-10-06 15:14

@ Helmut
Posting: # 4303

Macacus cynocephalus

Dear Helmut!

» As a trained chemist I heard of Kekulé's daydream

Here comes the true Kekulé's daydream:

Source:
Berichte der durstigen Chemischen Gesellschaft
Unerhörter Jahrgang, Nr. 20 zum 20.9.1886

Translated by babelfish:
Reports of the thirsty chemical society
outrageous class, No. 20 to 20.9.1886

Back to German
Reports der unverschämten Kategorie der durstigen chemischen Gesellschaft,
Nr. 20 bis 20.9.1886

To English again:
Report of the impudent category of the thirsty chemical society,
No. 20 to 20.9.1886 ....

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-06 21:46

@ d_labes
Posting: # 4306

Lost in translation

Dear D Labes!

1. DE → EN
Reports of the thirsty Chemical Society
Unheard-Year, No. 20 at 20.9.1886
2. EN → DE
Berichte der durstigen Chemical Society
Unerhört Jahrgang, Nr. 20 am 20.9.1886
3. DE → EN
Reports of the thirsty Chemical Society
Unheard-Volume, No. 20 on 20.9.1886
4. EN → DE
Berichte der durstigen Chemical Society
Unerhört-Band, Nr. 20 am 20.9.1886
5. DE → EN
Reports of the thirsty Chemical Society
Unheard-Volume, No 20 on 20.9.1886
What at first looked like a strange attractor stabilised after a few iterations and reached convergence in step 5.

ElMaestro
★★★

Denmark,
2009-10-05 17:32

@ d_labes
Posting: # 4298

non-central t is not the question

Dear dlabes,

» Your time is better invested for NLYW (blonde downstairs) or for the Seven seas .

I agree. One of the options you mention would get higher priority than the other, if I had a choice.

» The formula given in Julious is definitely approximate (...)

Cool. It becomes more and more difficult to figure out whom to trust.

» So you need not a better algo for the non-central t but rather an algo for Owen's Q-functions .

Excellent input. The non-central t dist is hereby dead and buried.
Seems this is a good place to start, and this power document gives the Qv function. Must see if I can implement that.

Thanks for the good post!

EM.
d_labes
★★★

Berlin, Germany,
2009-10-06 16:45

@ ElMaestro
Posting: # 4304

Power of TOST in R

Dear ElMaestro!

» Excellent input. The non-central t dist is hereby dead and buried. Seems this is a good place to start, and this power document gives the Qv function. Must see if I can implement that.

Once again: your time is better invested in whatever you prefer.
But I confess that implementing it yourself can be very satisfying!
('Oh, what a great guy I am, who can solve such a complicated definite integral!')

Have a look at the R package MBESS.
If I have seen it right, there is a function power.equivalence.md that will give you some spare time.
I could not test it myself. No spare time at the moment.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-07 02:18

@ d_labes
Posting: # 4309

Power of TOST (R package MBESS)

Dear D Labes!

» Have a look at the R package MBESS. If I had seen it right there is a function power.equivalence.md that will give you some spare time.

Quick tests

require(MBESS)
n  <- c(8,12,18,24,30,40,60) # total sample size
df <- n-2
power.equivalence.md.plot(.05, logscale=TRUE, .8, 1.25, .20,
                          n, df, 'Diletti, Figure 1c')

In the text output power is the theoretical value 0.05 (=α) at the borders of the acceptance range.

Now for ElMaestro's candidates:
n <- 6
power.equivalence.md(.05, logscale=TRUE, 0.8, 1.25, 0.95, 0.65, n, n-2)
[1] 0.001048352
n <- 38
power.equivalence.md(.05, logscale=TRUE, 0.8, 1.25, 0.95, 0.65, n, n-2)
[1] 0.01392505
Yet some other values for our collection.

d_labes
★★★

Berlin, Germany,
2009-10-07 08:46

(edited by d_labes on 2009-10-07 09:07)
@ Helmut
Posting: # 4310

Power of TOST (R package MBESS)

Dear Helmut!

» Quick tests

Oh, actually quick! But dirty.
I assume that your 5th parameter is meant to be the CV.
But the function requires sigma, I think.

With that modification

library(MBESS)
CV    <- 0.65
sigma <- sqrt(log(1+(CV)**2))
n <- 6
power.equivalence.md(0.05, TRUE, 0.8, 1.25, 0.95, sigma, n, n-2)
[1] 0.001620383
n <- 38
power.equivalence.md(0.05, TRUE, 0.8, 1.25, 0.95, sigma, n, n-2)
[1] 0.04509401

The answer for n=38 seems to me very sasophylistic, home-brewed magic and Q-functionalistic.
Not so astonishing if you look at the author of that piece of R-software.

PS: I have just seen that the documentation suggests that CV can be used in the logscale case. But this seems erroneous.
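
For reference, the conversion (function names are mine) and its inverse under the lognormal model:

```r
# CV <-> sigma on the log scale, assuming lognormal data.
CV2sigma <- function(CV)    sqrt(log(1 + CV^2))
sigma2CV <- function(sigma) sqrt(exp(sigma^2) - 1)
CV2sigma(0.65)            # ~0.594 -- what the sigma argument should be
sigma2CV(CV2sigma(0.65))  # 0.65, round trip
```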

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-07 12:38

@ d_labes
Posting: # 4314

Power of TOST (R package MBESS)

Dear D Labes!

» Oh actually quick! But dirty .

I always thought of it as a nice combination.

» The answer for n=38 seems to me very sasophylistic home-brewed magic and Q-functionalistic .
» Not so astonishing if you look at the author of that piece of R-software.

Ah, Kem Phillips!

» PS: I have just seen that the documentation suggest that CV can be used in case of log. But this seems erroneous.

Exactly. The example in the documentation drove me to my quickshot.

d_labes
★★★

Berlin, Germany,
2009-10-07 11:32

@ d_labes
Posting: # 4312

The Power at limits

Dear All!

Back from the date with the Blonde downstairs and having some spare time,

Seems ElMaestro's question is able to push the power to its limits.
Here are the results of power.equivalence.md around N=6:

R code:
library(MBESS)
alpha    <- 0.05
logscale <- TRUE
ltheta1  <- 0.8
ltheta2  <- 1/ltheta1
ldiff    <- 0.95
CV       <- 0.65                # added [HS]
sigma    <- sqrt(log(1+(CV)^2)) # added [HS]
n        <- c(4,6,8,10,12,14,16)
df       <- n-2
pow      <- mapply(power.equivalence.md, n=n, nu=df,
                   MoreArgs=list(alpha, logscale, ltheta1, ltheta2, ldiff, sigma))
res      <- data.frame(n=n, power=pow)
res

Result:
   n        power
1  4 0.0045417784
2  6 0.0016203829
3  8 0.0009763342
4 10 0.0007993335
5 12 0.0007966979
6 14 0.0009067329
7 16 0.0011306148

I would expect power to increase with increasing n.
Or am I missing something here?

BTW: My SAS code shows the same effect.
All this is hairsplitting of course.

Edit: Added two lines above to allow copy-pasting of the code to the R-console. BTW, change the sample size vector to n<-c(4,6,8,10,12,14,16,38,40,152,154,344,346) and watch (the gamma-function needs the factorial, which runs out of steam - at least on 32-bit operating systems). [Helmut]

Regards,

Detlew
d_labes
★★★

Berlin, Germany,
2009-10-07 15:15

(edited by d_labes on 2009-10-07 15:30)
@ d_labes
Posting: # 4316

Power at limits 2

Dear All!

Must reply to myself, because Helmut has posted an important finding in the edit line of my post above.

»
» [...] BTW, change the sample size vector to n<-c(4,6,8,10,12,14,16,38,40,152,154,344,346) and watch (the gamma-function needs the factorial, which runs out of steam - at least on 32-bit operating systems). [Helmut]

Seems there is an implementation flaw in the code. And to add insult to injury, it runs without an error. But the power is zero for n=346!

After inspecting the code at the R-console
> power.density.equivalence.md
function (power_sigma, alpha = alpha, theta1 = theta1, theta2 = theta2,
    diff = diff, sigma = sigma, n = n, nu = nu)
{
    pdim <- length(power_sigma)
    power_density <- c(1:pdim)
    power_t <- qt(1 - alpha, nu, lower.tail = TRUE, log.p = FALSE)
    a <- sigma/sqrt(nu)
    power_const <- -(nu/2 - 1) * log(2) - log(gamma(nu/2)) - log(a)
    for (isigma in 1:pdim) {
        power_d1 <- ((power_sigma[isigma] * power_t * sqrt(2/n)) +
            (theta1 - diff))/(sigma * sqrt(2/n))
        power_d2 <- ((-power_sigma[isigma] * power_t * sqrt(2/n)) +
            (theta2 - diff))/(sigma * sqrt(2/n))
        power_phi1 <- pnorm(power_d1, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
        power_phi2 <- pnorm(power_d2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
        power_phi <- power_phi2 - power_phi1
        power_density[isigma] <- power_phi * exp(power_const +
            (nu - 1) * log(power_sigma[isigma]/(a)) - 0.5 * (power_sigma[isigma]/(a))^2)
    }
    return(power_density)
}
you can see where it is.
So roll your own power.density.equivalence.md and change the line defining power_const to
    power_const <- -(nu/2 - 1) * log(2) - lgamma(nu/2) - log(a)
Now we have:
     n        power
[...]
11 154 0.8033132631
12 344 0.9839843189
13 346 0.9844217782
14 500 0.9982834559
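
A quick way to see why swapping log(gamma()) for lgamma() matters: gamma() overflows double precision for arguments above ~171.6, and nu/2 = 172 corresponds exactly to n = 346, where the power collapsed to zero:

```r
# gamma() overflows IEEE doubles above ~171.6; lgamma() stays finite.
gamma(171)    # ~7.26e306, still representable
gamma(172)    # Inf (with a warning) -> log(gamma(172)) is Inf as well
lgamma(172)   # ~711.71, no problem at all
```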

This is one example of why open source is so beautiful.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-10-07 22:36

@ d_labes
Posting: # 4319

package MBESS

Dear D Labes!

» This is one example of why open source is so beautiful.

Have you considered contacting Kem Phillips ( kemphillipscomcast.net) – also about the ambiguities in the documentation?

d_labes
★★★

Berlin, Germany,
2009-10-15 14:57

(edited by d_labes on 2009-10-15 15:20)
@ Helmut
Posting: # 4366

Package MBESS and Power curiosity

Dear Helmut! Dear all R adepts!

Helmut, as you suggested, I have contacted Kem Phillips.
He has confirmed that sigma in the case of logscale=TRUE is sqrt(log(1+CV^2)).
He has also confirmed the error for n>344 and will change the code in MBESS accordingly.

With respect to the power curiosity in my post "The Power at limits" above, we are still in an ongoing discussion. Kem's first supposition was numerical problems, but considering the results I present below he has become thoughtful.

Because PASS2008 gives the same results, I have confronted Jerry Hintze with that curiosity. Here is his answer:
"This is surprising to me, but since it occurs in a range that is of little interest, I don't see the need to investigate it further. It could be due to a number of numerical problems that occur when dealing with such a small sample size. However, since as you say, it matches SAS Analyst, I would conjecture that it is probably due to the definition of the equivalence test--that is, the definition probably leads to this result."

For me, I did see the need to investigate it further.
I performed an "in-silico" experiment to obtain an empirical power estimate.
I simulated data sets for the 2x2 cross-over with equal numbers of subjects in the two sequences using my beasty number cruncher SAS, performed the TOST, and recorded the incidence of concluding bioequivalence.
Here are my results:

ratio0=0.95, CV=0.65, BE limits 0.8 ... 1.25

                Power         SAS         EatMyShorts,    empirical
      Power  (R-code with   magic DL     Apfelstrudel,      power
 n   (MBESS)   Owen's Q)    (Owen's Q)   trapez meffud  (50.000 simul.)
-----------------------------------------------------------------------
  4  0.004542   0.004542    0.004542      0.004542        0.00452
  6  0.001620   0.001620    0.001620      0.001620        0.00146
  8  0.000976   0.000976    0.000976      0.000976        0.00086
 10  0.000799   0.000799    0.000799      0.000799        0.00080
 12  0.000797   0.000797    0.000797      0.000797        0.00074
 14  0.000907   0.000907    0.000907      0.000907        0.00084
 16  0.001131   0.001131    0.001131      0.001131        0.00110
 18  0.001501   0.001501    0.001501      0.001501        0.00132
 20  0.002077   0.002077    0.002078      0.002077        0.00230
 22  0.002952   0.002952    0.002952      0.002952        0.00352
 24  0.004253   0.004253    0.004253      0.004253        0.00468
 30  0.012708   0.012708    0.012708      0.012708        0.01216
 40  0.058717   0.058717    0.058717      0.058715        0.05666
 50  0.157587   0.157587    0.157587      0.157587        0.15790
BTW: ElMaestro, can you tell me how Apfelstrudel has found his way to Gent/Belgium?

As you can see, all results from the different implementations show the same behavior. So Jerry Hintze's presumption seems true: this behavior is inherent in the definition of the test.

BTW: The empirical power in that region converges badly to a stationary value. One has to use a huge number of simulated datasets to obtain a reasonably reliable value. 50.000 is obviously still too low.
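
To put a number on that (back-of-the-envelope binomial sampling error, my own sketch):

```r
# Standard error of an empirical power estimate from independent simulations.
p     <- 0.0008                 # true power near n = 10...12 (see table above)
nsims <- 5e4
se    <- sqrt(p*(1 - p)/nsims)  # binomial standard error
se/p                            # ~0.16, i.e. ~16% relative error
(1 - p)/(p*0.01^2)              # ~1.25e7 sims needed for ~1% relative error
```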

Greetings to all other pettifoggers (hair splitting needs sharp tools, and I haven't got anything better to do).

Regards,

Detlew
ElMaestro
★★★

Denmark,
2009-10-17 12:26

(edited by ElMaestro on 2009-10-17 12:43)
@ d_labes
Posting: # 4370

Package MBESS and Power curiosity

Ahoy d_labes,

» BTW: ElMaestro, can you tell me how Apfelstrudel has found his way to Gent/Belgium ?

When I made port in Bremerhafen in 1953 I decided to give Apfelstrudel a try; it turned out to be a fully acceptable alternative to the famous Vlaamse Friet, with a pleasantly matching (lack of) nutritional value.

Regarding only 50000 simulations:
You can get the famous software EFG 2.01 from HS. It will do millions of sims in a few seconds. HS - put it up for download?

Here's an update to give you more decimals, if you like:

Apfelstrudel <- function(t, df, ta, tb, ncp, ACCURACY)
{
  L1 = sqrt(2*pi);
  P = 0;
  step = (tb-ta)/ACCURACY;
  foo = ta;
  for (i in 1:ACCURACY)
    {
      y1 = EatMyShorts(foo, df, t, ncp);
      y2 = EatMyShorts(foo+0.5*step, df, t, ncp);
      y3 = EatMyShorts(foo+step, df, t, ncp);
      P = P + step*(y1+y2*4+y3)/6; ## homer simpson's meffud!
      foo = foo + step;
    }
  L1*P
}

Best regards
ElMaestro - sailor and simulant.

Edit: BBCoded. [Helmut]
Helmut
★★★

Vienna, Austria,
2009-10-17 13:44

@ ElMaestro
Posting: # 4371

Apfelstrudel and EFG 2.01

¡Hola ElMaestro!

» When I made port in Bremerhafen in 1953 I decided to give Apfelstrudel a try; it turned out to be […] a pleasantly matching (lack of) nutritional value.

'Apfelstrudel' in Bremerhafen, where waitresses can't even pronounce the word correctly (I would expect to hear something similar to Appelstrudel, whereas the correct pronunciation is Apflschdrudl)?
You should give the real stuff (of course together with whipped cream) a try in Vienna. 400 kcal/serving nourish sailors well.

» You can get the famous software EFG 2.01 from HS […] – put it up for download?

OK. I bundled the executable of EFG 2.01 together with the manual of EFG 1.02. After download (MD5: 892f7df65887edc7e1e4a1817ff3d90c) unzip to a folder of your choice. Should run on Windows (≥95); no installation required. Couldn’t test Win95, but WfW 3.11 with Win32 extensions did not work.

» It will do millions of sims in a few seconds.

Well – a couple of seconds.
Don’t get nervous – it never crashed my systems (Windows 2000 Pro SP4, XP/Pro SP3, and Vista/Pro SP2)*, but it doesn’t give a clue about its progress. Don’t expect the infamous turning hourglass or a beep when the sim is finished. As software engineers like to say:

‘Quick and dirty – or clean and never.’

BTW, no other software I know can give you a sample size estimation for Denmark’s requirement (CI includes 100%) out of the box!

ratio0=0.95, CV=0.65, BE limits 0.8 ... 1.25

     EFG 2.01    EFG 2.01
     Normal      Brute force
  n              (2×10^6)
----------------------------
  4  0.0045418   0.008385
  6  0.0016204   0.004700
  8  0.0009763   0.000478
 10  0.0007993   0.0004685
 12  0.0007967   0.0005085
 14  0.0009067   0.000647
 16  0.0011306   0.0008795
 18  0.0015009   0.0012605
 20  0.0020775   0.001691
 22  0.0029517   0.002559
 24  0.0042529   0.003802
 30  0.0127083   0.011871
 40  0.0587169   0.058030
 50  0.1575871   0.1570075

• Runs on Win7 SP1 64bit! [Helmut]

d_labes
★★★

Berlin, Germany,
2009-10-20 11:35

@ ElMaestro
Posting: # 4372

EFG and another power curiosity

Ahoy old simulant!

» Regarding only 50000 simulations:
» You can get the famous software EFG 2.01 from HS. It will do millions of sims in a few seconds.

I have given EFG 2.1 a try. Great.
Thanks for that stress test for my CPU. But my SAS can do better.

What I'm a little bit curious about is that, regardless of how big the number of sims is, the power for low sample sizes does not converge to your 'normal' values (at least for CV=30%).
(These are the 'exact' powers, I think, using your 'Abbelssstrudel'?)
Here is an example (power in %):
CV=30%, ratio=0.95, BE limits = 0.8 - 1.25 (fixed in EFG?)

      ------- EM's EFG 2.1 --------   poor man's
      normal      grand brute            SAS®
  N  ('exact')  100.000  1.000.000      10.000
  4    3.42       1.31     1.28          3.33
  6    3.95       2.74     2.64          3.93
  8    5.96       4.87     4.90          5.56
 10    9.54       8.72     8.67          9.32
 12   14.85      14.21    14.20         15.08 / 14.47 (two runs)
My own simulations (but of course with a lower number of sims, because of CPU overheating) do not show this effect.

I would expect convergence to the 'exact' values, because we simulate data with distributional characteristics that match those for which the exact power formula was derived. Do I miss somefink here?

Regards,

Detlew
ElMaestro
★★★

Denmark,
2009-11-23 19:03

@ d_labes
Posting: # 4384

EFG and another power curiosity

» Ahoy old simulant!

Ahoy, d_labes, perhaps I should get a contract in Hollywood?

» What I'm a little bit curious of is that regardless how big the number of sims is, the power for low sample size does not converge to your 'normal' values (at least for CV=30%) .
» (...) I would expect a convergence to the 'exact' values because we simulate data with distributional characteristics that match those for which the exact power formula was derived.

I guess you are right, thanks for pointing it out. And yes, absolutely: simulations of this type MUST converge towards the true value with an increasing number of iterations. I will head down to the programming department and punish the perpetrator right away! Nuts. At any rate, I will post a proposal a little later on how to make the Brute Force parts of EFG 2.01 obsolete.

Sorry I did not see your post earlier. Or should I say: I am sorry the forum admin screwed up, making it impossible to access the forum, thereby putting a halt to the rapid progression of the science that works in the interest of the sick.

EM.
Helmut
★★★

Vienna, Austria,
2009-11-23 19:17

@ ElMaestro
Posting: # 4386

EFG and another power curiosity

Hi ElMaestro,

nice to hear from you again!

» […] Or should I say, I am sorry the forum admin screwed up, […]

Don't dare!
It was a hell of a job to get the f**ing stuff up again.

» […] thereby putting a halt to rapid progression of the science that works in the interest of the sick.

Yeah, that's what it is all about in Big Pharma, isn't it?

yjlee168
★★★

Kaohsiung, Taiwan,
2009-10-07 00:08

@ ElMaestro
Posting: # 4307

non-central t, power, R:

Dear all,

I got power = 0.0016 with PASS, as shown in the attached picture. Funny.

» In a previous thread I asked what the power in a 2,2,2-BE trial is
» at N=6, CV=65% and T/R=95%.
» The things is, Fartssie comes up with a negative reply. HS's
» implementation in R of the equations given by Julious...

All the best,
-- Yung-jin Lee
bear v2.8.9:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Helmut
★★★

Vienna, Austria,
2009-10-07 01:16

@ yjlee168
Posting: # 4308

PASS

Dear Yung-jin!

» I got power = 0.0016 with PASS as the attached picture. Funny.

Crazy. From the manual I guessed that PASS also uses AS 243 and would give results comparable with Patterson's/Jones' SAS code (also mine, ported to R) and FARTSSIE. But from this post we know that PASS's result agrees with D Labes' magic SAS homebrew...
That's another example supporting Jaime's advice: never trust any piece of software you haven't written yourself (and even then you should be cautious...). Was this the reason to write bear?

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★

Kaohsiung, Taiwan,
2009-10-12 23:17
(4205 d 08:24 ago)

@ Helmut
Posting: # 4344
Views: 27,600

PASS

Dear Helmut,

» [..] you should be cautious...) Was this the reason to write bear?

The reason for developing bear was simply to try to prove that R can do BE/BA too. We have had a lot of fun from the development work. Lots to read and learn. That has been good enough for us. At least the BE/BA community now has an open-source package.

All the best,
-- Yung-jin Lee
bear v2.8.9:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
ElMaestro
★★★

Denmark,
2009-10-09 11:32
(4208 d 20:09 ago)

@ ElMaestro
Posting: # 4332
Views: 27,541

R code for power

What comes out of lying sleepless at night:

EatMyShorts <- function (x, V, T, W) {
    l1 = pnorm((T*x/sqrt(V)) - W)
    l2 = (V-1.0)*log(x) - lgamma(0.5*V) - (0.5*(V-2.0))*log(2.0)
    l3 = exp(l2)
    l4 = dnorm(x)
    l1*l3*l4
}

Apfelstrudel <- function(t, df, ta, tb, ncp) {
    L1 = sqrt(2*pi)
    P  = 0
    step = (tb-ta)/500
    foo  = ta
    for (i in 1:500) {
        y1 = EatMyShorts(foo, df, t, ncp)
        y2 = EatMyShorts(foo+step, df, t, ncp)
        P  = P + step*0.5*(y1+y2)  ## the trapez meffud!
        foo = foo + step
    }
    L1*P
}

Pwr222 <- function (N, alpha, AccLimitLo, AccLimitHi, TRRatio, CV) {
    df = N-2
    tc = qt(1-alpha, df)
    k  = sqrt(2/N)
    d1 = (log(TRRatio)-log(AccLimitLo))/(k * sqrt(log(1+CV*CV)))
    d2 = (log(TRRatio)-log(AccLimitHi))/(k * sqrt(log(1+CV*CV)))
    R  = sqrt(df) * (d1-d2) / (2*tc)
    P2 = Apfelstrudel(-tc, df, 0, R, d2)
    P1 = Apfelstrudel(tc, df, 0, R, d1)
    P2-P1
}

Increasing the number of integration steps (to something higher than 500) may improve accuracy, as would e.g. a method like Simpson's rule. Can be played around with.
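Such a Simpson's-rule variant could look like the sketch below (the name ApfelstrudelSimpson is mine; EatMyShorts is copied from the post above so the block is self-contained, and Pwr222 could simply swap Apfelstrudel for it):

```r
## Same integrand as in the post above, copied so this block runs on its own.
EatMyShorts <- function (x, V, T, W) {
    l1 = pnorm((T*x/sqrt(V)) - W)
    l2 = (V-1.0)*log(x) - lgamma(0.5*V) - (0.5*(V-2.0))*log(2.0)
    l1*exp(l2)*dnorm(x)
}

## Composite Simpson instead of the trapezoid rule (sketch, name is mine).
## Error shrinks like step^4 instead of step^2.
ApfelstrudelSimpson <- function(t, df, ta, tb, ncp, nsteps = 500) {
    if (nsteps %% 2 != 0) stop("nsteps must be even for Simpson's rule")
    h <- (tb-ta)/nsteps
    x <- seq(ta, tb, length.out = nsteps+1)
    y <- EatMyShorts(x, df, t, ncp)
    S <- y[1] + y[nsteps+1] +
         4*sum(y[seq(2, nsteps,   by = 2)]) +   # odd interior nodes
         2*sum(y[seq(3, nsteps-1, by = 2)])     # even interior nodes
    sqrt(2*pi) * S * h/3
}
```

At 500 steps both rules are already very accurate here; the gain shows up mainly when far fewer steps are used.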

Best regards
EM.
Ohlbe
★★★

France,
2009-10-09 11:50
(4208 d 19:51 ago)

@ ElMaestro
Posting: # 4333
Views: 27,449

Belgian beer ?

Dear ElMaestro,

» What comes out of lying sleepless at night:
» EatMyShorts <- function (x, V, T, W)
» Apfelstrudel<-function(t, df, ta, tb, ncp)

Regards
Ohlbe

d_labes
★★★

Berlin, Germany,
2009-10-09 13:50
(4208 d 17:51 ago)

@ ElMaestro
Posting: # 4335
Views: 27,898

Another one - R code for power of TROST

Dear ElMaestro, dear All!

Also suffering from senile "Bettflucht" (insomnia), here is my quick shot at it with Owen's Q:
(forgive me, I'm a very beginning adept of R)

# The functions of the normal and t distributions and integrate()
require(stats)
# -----------------------------------------------------------------------------
# Owen's Q-function
OwensQ <- function (nu, t, delta, a, b) {
   # .Machine$double.eps^.5 = 1.490116e-08 on my machine
   # What method integrate() uses is not clear to me. But it seems
   # sophisticated enough to assure accuracy
   Qintegral <- integrate(Q.integrand, lower = a, upper = b,
                          nu = nu, t = t, delta = delta, subdivisions = 1000,
                          rel.tol = .Machine$double.eps^.5,
                          abs.tol = .Machine$double.eps^.5,
                          stop.on.error = TRUE, keep.xy = FALSE, aux = NULL)
   Q <- Qintegral[[1]]
   return(Q)
}
# -----------------------------------------------------------------------------
# Integrand of the definite integral in Owen's Q. Used in the call of
# integrate(). Not useful alone, I think?
Q.integrand <- function(x, nu, t, delta) {
   # assure a vector answer, copied from Power.density.md of MBESS
   # look for a better implementation! is the loop necessary?
   pdim <- length(x)
   dens <- c(1:pdim)
   # constants for the loop
   sqrtnu   <- sqrt(nu)
   lnQconst <- 0.5*log(2*pi) - lgamma(nu/2) - ((nu-2)/2)*log(2)
   for (ix in 1:pdim) {
     xx <- x[ix]
     if (xx > 0) {
       d <- pnorm(t*xx/sqrtnu - delta, mean = 0, sd = 1, log.p = TRUE) +
            (nu-1)*log(xx) + dnorm(xx, mean = 0, sd = 1, log = TRUE)
       dens[ix] <- exp(lnQconst + d)
     } else {
       # do we need xx < 0? or must xx >= 0? for our power it is.
       dens[ix] <- 0
     }
   }
   return(dens)
}
# -----------------------------------------------------------------------------
# Power of the two one-sided t-tests (TOST) using OwensQ
# In case of logscale=TRUE give ldiff, ltheta1 and ltheta2 as ratios,
# f.i. ldiff=0.95, ltheta1=0.8, ltheta2=1.25
# In case of logscale=FALSE give ldiff, ltheta1 and ltheta2 as differences
# to 1, f.i. ldiff=0.05 (5% difference), ltheta1=-0.2, ltheta2=0.2
# (20% equivalence margins)
# CV is always the coefficient of variation, but as a ratio, not in %
Power.TOST <- function(alpha, logscale, ltheta1, ltheta2, ldiff, CV, n, df) {
    theta1 <- ltheta1
    theta2 <- ltheta2
    diff   <- ldiff
    s      <- CV
    if (logscale) {
        theta1 <- log(ltheta1)
        theta2 <- log(ltheta2)
        diff   <- log(ldiff)
        s      <- sqrt(log(1+CV*CV))
    }
    tval   <- qt(1 - alpha, df, lower.tail = TRUE, log.p = FALSE)
    delta1 <- (diff-theta1)/(s*sqrt(2/n))
    delta2 <- (diff-theta2)/(s*sqrt(2/n))
    R      <- (delta1-delta2)*sqrt(df)/(2*tval)
    p1     <- OwensQ(df,  tval, delta1, 0, R)
    p2     <- OwensQ(df, -tval, delta2, 0, R)
    power  <- p2 - p1
    return(power)
}
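One self-contained sanity check of the integrand (my own sketch, restating it compactly): with the upper limit b = Inf, Owen's Q reduces to the CDF of the noncentral t, so it can be compared directly against R's pt():

```r
## Q(nu, t, delta; 0, Inf) equals the noncentral-t CDF pt(t, nu, delta),
## which gives an easy check of the Owen's Q integrand (sketch, my own
## compact restatement; the name q.int is mine).
q.int <- function(x, nu, t, delta) {
    lnK <- 0.5*log(2*pi) - lgamma(nu/2) - ((nu-2)/2)*log(2)
    ifelse(x > 0,
           exp(lnK + pnorm(t*x/sqrt(nu) - delta, log.p = TRUE) +
               (nu-1)*log(x) + dnorm(x, log = TRUE)),
           0)
}
Q.inf <- integrate(q.int, 0, Inf, nu = 4, t = 2.132, delta = 0.5,
                   rel.tol = 1e-10)$value
all.equal(Q.inf, pt(2.132, 4, ncp = 0.5))   # should be TRUE
```

Of course this only validates the integrand, not the finite upper limit R used in the power calculation, but it rules out a wrong constant.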

Some spare 'validation' results (sample sizes from Diletti et al. for target powers 70%, 80%, 90%):
CV=0.3, BE margins 0.8-1.25, null ratio=0.95, alpha=0.05

   n  power(Q)   Apfelstrudel
1 32  0.7177182  0.7177182
2 40  0.8158453  0.8158453
3 52  0.9019652  0.9019652

PS: I think there is enough R power in this thread to move it to the R category now.

Edit: Category changed. require(stats) not needed, because loaded by default. [Helmut]

Regards,

Detlew