GSD
☆    

Canada,
2012-08-17 22:56
(4268 d 06:35 ago)

Posting: # 9076
Views: 21,294
 

 Potvin’s 2-stage adaptive design [Two-Stage / GS Designs]

Hey Everyone,

For Potvin’s 2-stage adaptive design, does anyone know how to simulate for the overall type I error with EMA reference scaling (Full Replicate)?

If there were no EMA reference scaling, we could simulate random numbers from a normal distribution with mean centered on ln(0.8) to estimate the type I error. But what distribution should we simulate from to account for the randomly changing acceptance limits (EMA scaling)?
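The non-scaled case can be sketched in a few lines of R (a minimal sketch of the principle for a balanced 2×2×2 crossover with a common variance; n = 24, CV = 30% and 10^5 simulations are just assumed values):

set.seed(123)
nsims <- 1e5                          # number of simulated studies
n     <- 24                           # total subjects (balanced 2x2x2)
CV    <- 0.30                         # intra-subject CV
sw    <- sqrt(log(CV^2 + 1))          # within-subject SD on the log scale
df    <- n - 2
pe    <- rnorm(nsims, log(0.80), sw*sqrt(2/n))  # point estimates at the margin
mse   <- sw^2 * rchisq(nsims, df)/df            # simulated residual variances
sem   <- sqrt(mse*2/n)                          # study-specific SE of the PE
lower <- exp(pe - qt(0.95, df)*sem)             # 90% CI
upper <- exp(pe + qt(0.95, df)*sem)
mean(lower >= 0.80 & upper <= 1.25)   # empirical type I error (at most ~0.05)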

Thanks a lot in advance for the help :)
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-08-18 02:39
(4268 d 02:52 ago)

@ GSD
Posting: # 9077
Views: 19,890
 

 Potvin’s 2-stage adaptive design + RSABE/ABEL

Hi GSD,

welcome to the club!

❝ For Potvin’s 2-stage adaptive design, does anyone know how to simulate for the overall type I error with EMA reference scaling (Full Replicate)?


You are aware that this framework is valid for (strictly speaking: balanced) 2×2×2 crossovers only?

❝ If there were no EMA reference scaling, we could simulate random numbers from a normal distribution with mean centered on ln(0.8) to estimate the type I error.


Yep. If you want to be consistent with Potvin & Montague, go with ln(1.25). Of course αemp. will be the same, but regulators are a strange bunch.

❝ But what distribution should we simulate from to account for the randomly changing acceptance limits (EMA scaling)?


Courageous! EMA has some strange POVs concerning Potvin’s methods (see these posts). Leaving this aside, there are some more problems to solve:
  • EMA’s crippled ‘models’ (both Methods A and B according to the Q&A document) assume a common (equal!) variance of test and reference. Weird, because in many cases (pharmaceutical technology improves) the variance of the reference is higher than that of the test. In simulations you should account for that.
  • The sample-size estimation for stage 2 is not straightforward. This is actually the major obstacle. Assuming you already have a validated framework, you would run simulations like the two Lászlós did.* But if you set up your own simulations, you would have to embed the same stuff in every run – or simply use their tables, whose step size (5% for CVs and GMRs) is too wide. I had the same idea a while ago and estimated the runtime on my machine at six years (24/7).
  • How would you estimate power between stages? Well, EMA has some problems with this approach anyhow. Maybe you go with the ‘rumour method’ (see this post and the following ones), but definitely not with C/D.
  • At last, coming back to your question: simulate as usual – but you have to employ a conventional ABE model if CV <30% and ABEL above, plus the 50% cap and the GMR restriction (a small R sketch of the limits follows below). At CV 30%, half of the studies have to be assessed by ABE and the other half by ABEL.
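A minimal R sketch of these acceptance limits (the helper name and the example CVs are mine; CVs are given as ratios, and the point-estimate restriction comes on top):

ABEL.limits <- function(CVwR) {
  # conventional ABE limits up to CVwR 30%; scaling capped at CVwR 50%
  if (CVwR <= 0.30) return(c(lower = 0.80, upper = 1.25))
  swR <- sqrt(log(min(CVwR, 0.50)^2 + 1))
  c(lower = exp(-0.76*swR), upper = exp(+0.76*swR))
}
ABEL.limits(0.35)   # ≈ 0.7723, 1.2948
ABEL.limits(0.75)   # capped: ≈ 0.6984, 1.4319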
If you consider the ‘table approach’ I would suggest recalculating the Lászlós’ tables with at least 10^5 studies each. For my taste the given ten thousand sims are too ‘granular’. It should be enough to simulate GMRs of 0.95 (for 1–βemp.) and 1.25 (for αemp.) with a smaller step size of CVs (and probably for target power >80% as well).

[image]

[image]

BTW, the plots nicely reveal the impact of EMA’s 50% cap on the sample size. For GMR 0.95 and CV 75% according to EMA you would need 35 subjects but only 20 for the FDA…


  • László Endrényi, László Tóthfalusi
    Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs
    J Pharm Pharmaceut Sci 15(1), 73–84 (2011)
    online

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
GSD
☆    

Canada,
2012-08-31 18:30
(4254 d 11:01 ago)

@ Helmut
Posting: # 9134
Views: 19,699
 

 Potvin’s 2-stage adaptive design + RSABE/ABEL

Hi Helmut,

Thanks a lot for your reply! The following are my thoughts in regards to the issues:

❝ EMA’s crippled ‘models’ (both Methods A and B according to the Q&A document) assume a common (equal!) variance of test and reference. Weird, because in many cases (pharmaceutical technology improves) the variance of the reference is higher than that of the test. In simulations you should account for that.


Yes, I will take into account the difference in variance of the test and reference while doing the simulation.

❝ The sample-size estimation for stage 2 is not straightforward. This is actually the major obstacle. Assuming you already have a validated framework, you would run simulations like the two Lászlós did. But if you set up your own simulations, you would have to embed the same stuff in every run – or simply use their tables, whose step size (5% for CVs and GMRs) is too wide. I had the same idea a while ago and estimated the runtime on my machine at six years (24/7).


As for the major obstacle you mentioned, I think I should be able to get a fairly good estimate by first obtaining all of the power/sample-size tables (as the two Lászlós did), but with a much smaller increment (e.g. CV and GMR increments of, say, 0.5% or 1%). I will then run the actual simulation and for each run query the result tables. Assuming I use 10^4 simulations, the production of all these tables should take no longer than 2 weeks.
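In R the table/query idea could look roughly like this with PowerTOST (a sketch of the bookkeeping only – the grid limits, the adjusted α 0.0294 and the plain 2×2 design are placeholders; for the scaled full-replicate case the sample-size call itself would have to be replaced):

library(PowerTOST)
CV.grid <- seq(0.10, 0.80, by = 0.005)       # 0.5% steps
N.tab   <- sapply(CV.grid, function(cv)
             sampleN.TOST(alpha = 0.0294, CV = cv, theta0 = 0.95,
                          targetpower = 0.80, design = "2x2",
                          method = "shifted", print = FALSE)[["Sample size"]])
lookupN <- function(cv) N.tab[which.min(abs(CV.grid - cv))]  # query inside the sims
lookupN(0.314)                               # total N for an observed CV of ~31.4%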

❝ At last, coming back to your question: simulate as usual – but you have to employ a conventional ABE model if CV <30% and ABEL above, plus the 50% cap and the GMR restriction. At CV 30%, half of the studies have to be assessed by ABE and the other half by ABEL.


This is actually the main obstacle for me. When I simulate from a normal distribution with mean centered on ln(0.8) or ln(1.25) and employ the different models based on the different sample CVs, I get a wildly inflated type I error.

Let’s just start with the usual 1-stage design with a full replicate model and consider EMA scaling. If I simulate as above, I get empirical type I errors of 5.1%, 8.5%, 25.4%, 36.9%, 47.1% and 49.8% for CVs of 10%, 30%, 35%, 40%, 50% and 70%, respectively. (The sample size in each case was chosen for 80% power.)

Should I try instead to find the center of the distribution that would allow me to control the type I error at exactly 5%? For example, when the CV is 40%, if I simulate from a normal distribution with mean centered on ln(0.748), it will give me an empirical type I error of ~5%.


Edit: Standard quotes restored. [Helmut]
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-08-31 20:37
(4254 d 08:54 ago)

@ GSD
Posting: # 9135
Views: 19,835
 

 Tough job

Hi GSD!

❝ […] I will then run the actual simulation and for each run query the result tables. Assuming I use 10^4 simulations, the production of all these tables should take no longer than 2 weeks.


Yes, but as I said at the end of my post, not only is the grid too coarse, but IMHO 10^4 sims are not enough.

❝ When I simulate from a normal distribution with mean centered on ln(0.8) or ln(1.25) and employ the different models based on the different sample CVs, I get a wildly inflated type I error.


A technical question: How do you define the mean and CV in log-scale? Maybe this post helps.

❝ Let’s just start with the usual 1-stage design with a full replicate model and consider EMA scaling. If I simulate as above, I get empirical type I errors of 5.1%, 8.5%, 25.4%, 36.9%, 47.1% and 49.8% for CVs of 10%, 30%, 35%, 40%, 50% and 70%, respectively. (The sample size in each case was chosen for 80% power.)


Terrible – and surprising. With a CV of 10% we should expect to have to employ the ABEL model only very rarely. Stupid question: which one of Potvin’s methods are you aiming at? With Method B I would not expect such a large inflation. Right now I’m re-calculating their tables (up to 50% CV, with a narrower grid and the exact method rather than the shifted t). Here is a kriging surface to interpolate sims which are not completed yet:

[image]

The critical region runs from [n1 12–21 | CV 26%] to [n1 44–50 | CV 60–48%]. Below CV 12% (at n1 12) and CV 30% (at n1 60) the framework is extremely conservative (αemp. ≈ αadj.).

❝ Should I try instead to find the center of the distribution that would allow me to control the type I error at exactly 5%? For example, when the CV is 40%, if I simulate from a normal distribution with mean centered on ln(0.748), it will give me an empirical type I error of ~5%.


Doesn’t make sense. ln(0.8)/ln(1.25) are fixed margins and we are testing for no inflation there. 0.748 is outside the margin and the GMR-restriction would reject all studies.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
GSD
☆    

Canada,
2012-08-31 21:23
(4254 d 08:08 ago)

@ Helmut
Posting: # 9136
Views: 19,541
 

 Tough job

Hi Helmut!

Thanks for the graph!

❝ Yes, but as I said at the end of my post, not only is the grid too coarse, but IMHO 10^4 sims are not enough.


I actually tried both 10^4 and 10^5 simulations (for about five different scenarios); the results are almost identical. I realized that 10^6 is a must when you go for Potvin’s 2-stage.

❝ Which one of Potvin’s methods are you aiming at?


Methods C/D.

❝ Doesn’t make sense. ln(0.8)/ln(1.25) are fixed margins and we are testing for no inflation there. 0.748 is outside the margin and the GMR-restriction would reject all studies.


But if any of your sample CVs exceeds 30%, the acceptance limits are widened. They are no longer fixed at 0.80 and 1.25. When you enter your simulation with a starting CV of, say, 35%, most of your random samples will have acceptance limits in the neighborhood of [0.7723, 1.2948]. In this case, if you still center on ln(0.8), the simulated “type I error” will be huge.
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-09-01 17:09
(4253 d 12:22 ago)

@ GSD
Posting: # 9140
Views: 19,983
 

 C/D for EMA? Good luck!

Hi GSD!

❝ […] I realized that 10^6 is a must when you go for Potvin’s 2-stage.


Yes – 1/√N hits.*
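A quick base-R illustration of that 1/√N behaviour for an empirical α near 0.05 (the N values are just the ones discussed above):

p <- 0.05
for (N in c(1e4, 1e5, 1e6))
  cat(sprintf("N = %.0e   SE = %.5f\n", N, sqrt(p*(1 - p)/N)))
# N = 1e+04   SE = 0.00218
# N = 1e+05   SE = 0.00069
# N = 1e+06   SE = 0.00022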

❝ ❝ Which one of Potvin’s methods are you aiming at?


❝ Methods C/D.


Are you aware that the acceptability of C/D for the EMA is limited (pun intended)? If you want to deal with ABEL, I reckon only Method B might be acceptable – though some member states even mistrust the validity of the power evaluation in the interim analysis altogether. Right now C/D is acceptable (without major discussions) only in the US and Canada. Since HPFB does not allow scaling, this leaves the FDA with an easier model (without the 50% cap – but with the discontinuity at CV 30%).

❝ But if any of your sample CVs exceeds 30%, the acceptance limits are widened. They are no longer fixed at 0.80 and 1.25. When you enter your simulation with a starting CV of, say, 35%, most of your random samples will have acceptance limits in the neighborhood of [0.7723, 1.2948]. In this case, if you still center on ln(0.8), the simulated “type I error” will be huge.


Sounds like chicken-and-egg to me. :-D The ratio and CV in the sims are independent – but the scaled acceptance range depends on the actual CV in every simulated study…


  • Takes a while to obtain a stable estimate (the red line on top is the critical limit of α>0.05):

    [image]

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2012-09-05 14:57
(4249 d 14:34 ago)

@ Helmut
Posting: # 9150
Views: 19,712
 

 Surprise?

Dear Helmut, dear GSD!

❝ ❝ Let’s just start with the usual 1-stage design with a full replicate model and consider EMA scaling. If I simulate as above, I get empirical type I errors of 5.1%, 8.5%, 25.4%, 36.9%, 47.1% and 49.8% for CVs of 10%, 30%, 35%, 40%, 50% and 70%, respectively. (The sample size in each case was chosen for 80% power.)


❝ Terrible – and surprising.


This is not a surprise if you think twice.

Compare GSD's values with figure 4 from the paper of the two Lászlós:
Tóthfalusi, Endrényi
"Limits for the scaled average bioequivalence of highly variable drugs and drug products"
Pharm Res 20, 382–389 (2003)
[image]
Open symbols are the scaled ABE simulations.

Despite the fact that they simulated 2×2 cross-over studies and used σ0 = 0.2, the qualitative behaviour resembles the numbers given by GSD.

To have an empirical alpha, IMHO it is indeed necessary to simulate studies at a GMR which is identical to the widened acceptance limits implied by the CV used for the simulation.
This is irrespective of the fact that we use the sample CV for widening the BE limits in the evaluation of each study. When simulating with a certain CV we know it, and therefore we also know the inequivalence margins that would apply for the population.
We act the same way in the case of normal ABE: we simulate with a GMR at the BE limit 1.25 (or 0.8) but use the sample point estimate in the evaluation of each simulated study.
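A rough one-stage sketch of this principle in R (2×2×2 with a common variance as in the EMA ‘models’ – the simplification criticised above; n = 34, CV = 40% and 10^5 simulations are assumed values, and the residual CV drives both the widening and the CI):

ABEL.limits <- function(CV) {               # as sketched further up in the thread
  if (CV <= 0.30) return(c(0.80, 1.25))
  swR <- sqrt(log(min(CV, 0.50)^2 + 1))
  exp(c(-1, 1)*0.76*swR)
}
set.seed(42)
nsims  <- 1e5
n      <- 34                                # assumed sample size
CV     <- 0.40                              # simulation (population) CV
sw     <- sqrt(log(CV^2 + 1))
theta0 <- ABEL.limits(CV)[2]                # true GMR = widened limit of the population
df     <- n - 2
pe     <- rnorm(nsims, log(theta0), sw*sqrt(2/n))
mse    <- sw^2 * rchisq(nsims, df)/df
CVs    <- sqrt(exp(mse) - 1)                # sample CVs drive the limits of each study
sem    <- sqrt(mse*2/n)
lo     <- exp(pe - qt(0.95, df)*sem)        # 90% CI
hi     <- exp(pe + qt(0.95, df)*sem)
lim    <- t(sapply(CVs, ABEL.limits))
pass   <- lo >= lim[, 1] & hi <= lim[, 2] &
          exp(pe) >= 0.80 & exp(pe) <= 1.25 # point-estimate (GMR) restriction
mean(pass)                                  # empirical alpha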

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-09-05 15:35
(4249 d 13:56 ago)

@ d_labes
Posting: # 9151
Views: 19,409
 

 Got it. Ages for sims to complete

Dear Detlew!

❝ This is not a surprise if you think twice.

❝ To have an empirical alpha, IMHO it is indeed necessary to simulate studies at a GMR which is identical to the widened acceptance limits implied by the CV used for the simulation.

❝ This is irrespective of the fact that we use the sample CV for widening the BE limits in the evaluation of each study. When simulating with a certain CV we know it, and therefore we also know the inequivalence margins that would apply for the population.


Oh yes, you are right. But IMHO this is a true show-stopper (aka end of story) unless one has a super-computer in the backyard. Even if we use just 10^4 sims in the sample-size estimation, that would mean 10^10 sims for each point of the grid. BTW, even that would only work without interim power.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2012-09-05 16:51
(4249 d 12:39 ago)

@ Helmut
Posting: # 9152
Views: 19,467
 

 Life belt

Dear Helmut!

❝ ... But IMHO this is a true show-stopper (aka end of story) unless one has a super-computer in the backyard. Even if we use just 10^4 sims in the sample-size estimation, that would mean 10^10 sims for each point of the grid. BTW, even that would only work without interim power.


Full ACK.

IMHO we should be glad that we have the option to widen the BE limits at all according to the EMA guidance in the case of HVDs. This reduces the burden considerably for such formulations.
Reaping the advantages of a 2-stage design in this situation as well would be overkill.

Moreover, we have to be convinced that our API is an HVD. That means that we must have a good, reliable estimate of the intra-subject CV (of the reference), which has to be >30%. In that situation an additional life belt for missing or uncertain information to plan the study, aka a "2-stage design", is superfluous.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-09-05 17:20
(4249 d 12:11 ago)

@ d_labes
Posting: # 9153
Views: 19,464
 

 Life belt

Dear Detlew!

❝ IMHO we should be glad that we have the option to widen the BE limits at all according to the EMA guidance in the case of HVDs. This reduces the burden considerably for such formulations.


Cough. Cmax only. ;-)
Pfizer’s world-record is 300+ subjects in a full replicate (no scaling)…

❝ Reaping the advantages of a 2-stage design in this situation as well would be overkill.


That’s why I called GSD’s attempts in my first reply “courageous”.

❝ Moreover, we have to be convinced that our API is an HVD. That means that we must have a good, reliable estimate of the intra-subject CV (of the reference), which has to be >30%. In that situation an additional life belt for missing or uncertain information to plan the study, aka a "2-stage design", is superfluous.


Define “good reliable estimate of the intra-subject CV”. :-D
Especially PPIs show CVs varying across studies from 30–90%. Some studies passed with 24 subjects whilst others failed with 120. Occasionally higher CVs were seen in larger studies; it is then more likely to face outliers inflating the CV. That’s an example where a 2-stage design would not help, because the CV estimated from the first stage would be too small anyway.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
K2K
★    

India,
2013-10-07 16:43
(3852 d 12:48 ago)

@ Helmut
Posting: # 11619
Views: 17,848
 

 Power for Two stage Design

Dear Sir,

How does one calculate power for a two-stage design?

thanks and regards
Karthik
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-10-07 16:59
(3852 d 12:32 ago)

@ K2K
Posting: # 11620
Views: 17,882
 

 Power for Two stage Design

Hi Karthik,

❝ How does one calculate power for a two-stage design?


Not sure what exactly you want. :confused:
In the published frameworks power is fixed (Potvin et al. and Montague et al. 80%, Fuglsang 90%). With the exception of combinations of small stage 1 sample sizes and high CVs, actual power exceeds these values (see this slide for Potvin’s Method B).
Or do you want to perform your own simulations?* Tough job. Simulate at least 10^5 BE studies in a grid of n1 and CV with your target GMR. Empiric power = number of studies passing / number of simulations. Depending on the software and grid size, expect run-times of a couple of hours to a few weeks…
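A skeleton of that bookkeeping in R (the plain one-stage TOST below is only a stand-in for the full two-stage decision scheme, which you would have to plug into emp.power; the grid limits and 10^4 sims per cell are arbitrary):

emp.power <- function(n1, CV, GMR = 0.95, nsims = 1e4, alpha = 0.0294) {
  sw  <- sqrt(log(CV^2 + 1))
  df  <- n1 - 2
  pe  <- rnorm(nsims, log(GMR), sw*sqrt(2/n1))
  mse <- sw^2 * rchisq(nsims, df)/df
  sem <- sqrt(mse*2/n1)
  lo  <- exp(pe - qt(1 - alpha, df)*sem)
  hi  <- exp(pe + qt(1 - alpha, df)*sem)
  mean(lo >= 0.80 & hi <= 1.25)   # empiric power = studies passing / simulations
}
grid <- expand.grid(n1 = seq(12, 60, 12), CV = seq(0.1, 0.6, 0.1))
grid$power <- mapply(emp.power, grid$n1, grid$CV)
head(grid)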


  • If you want to use a combination of GMR and target power which is not published yet, you have to demonstrate no substantial inflation of the patient’s risk, i.e., you have to find a suitable αadj. Simulate with a GMR = one of the borders of the acceptance range. Since the convergence is slow (see this post) you have to simulate 10^6 studies at each grid point.
    Hints:
    Method B ⇒ Method C: αadj ↓ (= wider CI)
    Target power ↑: αadj ↓
    GMR moving away from 1: αadj ↓

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
K2K
★    

India,
2013-10-08 16:35
(3851 d 12:56 ago)

@ Helmut
Posting: # 11626
Views: 17,720
 

 Power for Two stage Design

Dear Sir

I am also a little bit confused.

After completion of stage 1 we want to evaluate the power. If power is ≥80%, evaluate BE at stage 1 (alpha = 0.05); if power is <80%, evaluate BE at stage 1 (alpha = 0.0294) …

Only after the evaluation of power can we plan the further steps (if I am wrong, please guide me). In most of your lectures and discussions you mention ‘no a posteriori power’.

So in a two-stage study design, how do we evaluate the power for our data?

Is there any validated method to evaluate the power for a two-stage study design, or can I use PowerTOST or WinNonlin to evaluate the power?

Sorry if I have confused you; please guide me.
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-10-08 16:58
(3851 d 12:33 ago)

@ K2K
Posting: # 11627
Views: 17,786
 

 “Interim power”

Hi Karthik,

❝ After completion of stage 1 we want to evaluate the power. If power is ≥80%, evaluate BE at stage 1 (alpha = 0.05); if power is <80%, evaluate BE at stage 1 (alpha = 0.0294)


Yes, that’s “Method C”.

❝ Only after the evaluation of power can we plan the further steps.


Correct.

❝ In most of your lectures and discussions you mention ‘no a posteriori power’.


That’s not a posteriori power, but part of the decision tree of the interim analysis. A posteriori would be after the study is completed (passing in stage 1 or in the combined analysis). See also what Potvin et al. state concerning their Example 2:

Using the data from both stages, we find that the two-sided CI for the ratio (T/R) of geometric means at an α level of 0.0294 and DF = 17 is 88.45–116.38%, which meets the 80–125% acceptance criterion. We stop here and conclude BE, irrespective of the fact that we have not yet achieved the desired power of 80% (power = 66.3%).

(my emphasis)

❝ So in a two-stage study design, how do we evaluate the power for our data?


After stage 1
  • Method C
    fixed T/R 0.95, observed CV, n1, α 0.05
  • Method B
    fixed T/R 0.95, observed CV, n1, α 0.0294
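With PowerTOST that is a one-liner each (CV 0.25 and n1 12 below are just example values):

library(PowerTOST)
CV1 <- 0.25   # observed CV after stage 1 (example value)
n1  <- 12     # stage 1 sample size (example value)
power.TOST(alpha = 0.05,   CV = CV1, n = n1, theta0 = 0.95)   # Method C
power.TOST(alpha = 0.0294, CV = CV1, n = n1, theta0 = 0.95)   # Method B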

❝ Is there any validated method to evaluate the power for a two-stage study design, or can I use PowerTOST or WinNonlin to evaluate the power?


PowerTOST is fine (see this example for Method B). Power in Phoenix/WinNonlin is wrong – don’t use it!

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
K2K
★    

India,
2013-10-09 14:39
(3850 d 14:52 ago)

@ Helmut
Posting: # 11630
Views: 17,576
 

 “Interim power”

Dear Sir,

Now I am clear thanks to your explanation. This doubt has been in my head for a long time, but I hesitated to ask.

Thank you very much.

Which formula does PowerTOST actually use, and how can I validate PowerTOST at my end?

According to my SOP I have to state the formula, so please share it with me if possible.

Please guide me.

thanks and regards
Karthik
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-10-09 16:10
(3850 d 13:21 ago)

@ K2K
Posting: # 11632
Views: 17,916
 

 Validation of PowerTOST

Hi Karthik,

❝ Which formula does PowerTOST actually use, and how can I validate PowerTOST at my end?

❝ According to my SOP I have to state the formula, so please share it with me if possible.


Hint:
library(PowerTOST)
help(PowerTOST)


Scroll down to the general references. For the “exact” method see help(OwensQ). Background and formulas in
{R-folder}/library/PowerTOST/doc/BE_power_sample_size_excerpt.pdf.
For validation of sampleN.TOST() see help(data2x2) and the script test_2x2.R in the /tests-folder. This may serve as an indirect validation of power.TOST() – which is repeatedly called within sampleN.TOST() until the target power is reached.
You can also use the two example data sets of Potvin et al., though you have to use method='shifted' in power.TOST() – since they used the shifted central t-distribution in their simulations for speed reasons.

library(PowerTOST)
n  <- 12
m  <- 'shifted'
a0 <- 0.05
a1 <- 0.0294


Example 1, stage 1
Method B
                                                            power  Ntotal
Potvin                                                       75.6    14
round(100*power.TOST(alpha=a1, CV=0.1456, n=n, method=m),1)  75.6
sampleN.TOST(alpha=a1, CV=0.1456, method=m)                          14


Method C
                                                            power  Ntotal
Potvin                                                       84.1    NA
round(100*power.TOST(alpha=a0, CV=0.1456, n=n, method=m),1)  84.1


Example 2, stage 1
Method B
                                                            power  Ntotal
Potvin                                                       50.5    20
round(100*power.TOST(alpha=a1, CV=0.1821, n=n, method=m),1)  50.5


Method C
                                                            power  Ntotal
Potvin                                                       64.9    20
round(100*power.TOST(alpha=a0, CV=0.1821, n=n, method=m),1)  65.0


both methods
                                                                   Ntotal
sampleN.TOST(alpha=a1, CV=0.1821, method=m)                          20


a posteriori power of the pooled data set, both methods
                                                            power
Potvin                                                       66.3
round(100*power.TOST(alpha=a1, CV=0.2167, n=20, method=m),1) 66.7


Again: Don’t use Phoenix/WinNonlin, which reports after stage 1 for Method B (0.0294) power 93.4% (example 1) and 80.4% (example 2)…

If you want to cross-validate PowerTOST against commercial software (e.g., NQuery Advisor or PASS), use method='shifted', because these programs employ the shifted central t-distribution instead of the exact method.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2013-10-09 18:11
(3850 d 11:20 ago)

@ K2K
Posting: # 11635
Views: 17,607
 

 Phoenix/WinNonlin - power

Dear Karthik,

In addition to Helmut's advice

❝ Again: Don’t use Phoenix/WinNonlin ...

I would like to point you to this thread, which explains in the subsequent posts why you shouldn't.

It came out in that thread that the power calculated by WinNonlin is the power of a test for difference and not the power of a test for equivalence, which is the one that has to be used in bioequivalence studies.

At least in 2011 (the date of the thread) this was the case. As Helmut's numbers show, it seems unchanged up to now.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-10-09 20:19
(3850 d 09:12 ago)

@ d_labes
Posting: # 11636
Views: 18,845
 

 Phoenix/WinNonlin - power

Hi Detlew & Karthik,

❝ […] the power calculated by WinNonlin is the power of a test for difference and not the power of a test for equivalence, which is the one that has to be used in bioequivalence studies.


❝ At least in 2011 (the date of the thread) this was the case. As Helmut's numbers show, it seems unchanged up to now.


Yep. Wrong in all versions up to the current release (6.3.0.395). Pharsight is considering a calculation for BE based on the noncentral t in v6.4 (scheduled for the 1st half of 2014). In the last internal release I saw (v6.4.0.334 of 29 May) there was already a function Power_TOST based on the shifted central t implemented. From an example on the Extranet (CV 0.233927817734416, T/R 1.04658784494175, n 20):

power.TOST(…, method='exact')           0.72740071
power.TOST(…, method='noncentral')      0.72740028
power.TOST(…, method='shifted')         0.71966084
Phoenix/WinNonlin 6.4.0.334 Power_TOST  0.71966088


Still an approximation of an approximation, but not completely off. There is hope. ;-)
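For the record, the PowerTOST side of that comparison can be reproduced from the values quoted above (assuming the default 2×2 design, α 0.05 and limits 0.80–1.25):

library(PowerTOST)
CV     <- 0.233927817734416
theta0 <- 1.04658784494175
n      <- 20
power.TOST(CV = CV, theta0 = theta0, n = n, method = "exact")
power.TOST(CV = CV, theta0 = theta0, n = n, method = "noncentral")
power.TOST(CV = CV, theta0 = theta0, n = n, method = "shifted")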


Edit 2014-01-10: Pre-release V1.4 (Core Version 21Nov2013)

Phoenix/WinNonlin 6.4.0.511 Power_TOST  0.72740029 Yeah!


Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
K2K
★    

India,
2013-10-10 10:36
(3849 d 18:55 ago)

@ Helmut
Posting: # 11638
Views: 17,525
 

 power

Dear sir

Thanks for your help and guidance. :-D

thanks and regards
Karthik