Potvin’s 2-stage adaptive design + RSABE/ABEL [Two-Stage / GS Designs]

posted by GSD – Canada, 2012-08-31 18:30 – Posting: # 9134

Hi Helmut,

Thanks a lot for your reply! Here are my thoughts on the issues you raised:

❝ EMA’s crippled ‘models’ (both Methods A and B according to the Q&A document) assume a common (equal!) variance of test and reference. Weird, because in many cases (pharmaceutical technology improves) the variance of the reference is higher than that of the test. In simulations you should account for that.


Yes, I will account for the difference between the variances of test and reference in the simulations.
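Something along these lines is what I have in mind (a minimal Python sketch assuming a TRTR/RTRT full replicate without period or sequence effects; the function name, CVs and the between-subject CV are placeholders, not a validated framework):

```python
import numpy as np

rng = np.random.default_rng(1234)

def simulate_replicate_logdata(n, gmr, cv_wT, cv_wR, cv_b=0.40):
    """Log-scale data for a TRTR/RTRT full replicate, no period or sequence
    effects; cv_wT and cv_wR may differ (all values purely illustrative)."""
    s_wT = np.sqrt(np.log(cv_wT**2 + 1.0))    # within-subject SD of T
    s_wR = np.sqrt(np.log(cv_wR**2 + 1.0))    # within-subject SD of R
    s_b  = np.sqrt(np.log(cv_b**2 + 1.0))     # between-subject SD
    subj = rng.normal(0.0, s_b, size=n)       # random subject effects
    # two administrations of each formulation per subject
    logT = subj[:, None] + np.log(gmr) + rng.normal(0.0, s_wT, size=(n, 2))
    logR = subj[:, None] + rng.normal(0.0, s_wR, size=(n, 2))
    return logT, logR
```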

❝ The sample size estimation for stage 2 is not straightforward. This is actually the major obstacle. Assuming you already have a validated framework, you would run simulations like the two Lászlós did. But if you set up your own simulations, in every run you would have to embed the same stuff – or simply use their tables, whose step size is too wide (5% for CVs and GMRs). I had the same idea a while ago and estimated the runtime on my machine at six years (24/7).


As for the major obstacle you mentioned, I think I can get a fairly good estimate by first producing all of the power/sample-size tables (as the two Lászlós did), but with a much finer step size (e.g., 0.5% or 1% for CVs and GMRs). I will then run the actual simulation and, for each run, query the result tables. Assuming I use 10^4 simulations, producing all of these tables should take no longer than two weeks.
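A rough outline of the table-and-lookup idea (a sketch only: approx_n() is a crude normal-approximation stand-in for simulation-based sample sizes, and the grid bounds and the 0.5% step are my assumptions). The table is built once; each simulated stage 1 then costs only a dictionary lookup:

```python
import numpy as np
from scipy.stats import norm

def approx_n(cv, gmr, alpha=0.05, power=0.80):
    """Crude normal-approximation total N for a 2x2x2 crossover TOST;
    a stand-in for a validated, simulation-based sample-size routine."""
    s_w   = np.sqrt(np.log(cv**2 + 1.0))             # within-subject SD (log scale)
    z_a   = norm.ppf(1.0 - alpha)
    z_b   = norm.ppf(1.0 - (1.0 - power) / 2) if np.isclose(gmr, 1.0) else norm.ppf(power)
    delta = np.log(1.25) - abs(np.log(gmr))          # distance to the nearer BE limit
    n = 2.0 * s_w**2 * (z_a + z_b)**2 / delta**2
    return int(np.ceil(n / 2.0) * 2)                 # round up to an even total

# Build the table once on a fine grid (0.5 % steps -- my assumption), then
# query it with the observed stage-1 CV/GMR during the main simulation.
cv_grid  = np.round(np.arange(0.10, 0.705, 0.005), 3)
gmr_grid = np.round(np.arange(0.85, 1.205, 0.005), 3)
n_table  = {(cv, g): approx_n(cv, g) for cv in cv_grid for g in gmr_grid}

def lookup_n(cv, gmr):
    """Snap the observed CV/GMR to the nearest grid point and return N."""
    cv_key = cv_grid[np.argmin(np.abs(cv_grid - cv))]
    g_key  = gmr_grid[np.argmin(np.abs(gmr_grid - gmr))]
    return n_table[(cv_key, g_key)]
```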

❝ At last, coming back to your question: simulate as usual – but you have to employ a conventional ABE model if CV < 30%, ABEL above (plus the 50% cap and the GMR restriction). At CV = 30%, half of the studies have to be assessed by ABE and the other half by ABEL.


This is actually the main obstacle for me. When I simulate from a normal distribution with mean centered on ln(0.8) or ln(1.25) and employ the different models based on the different sample CVs, I get a wildly inflated type I error.
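For reference, this is how I understand the switching rule (a sketch only; the regulatory constant 0.76 and the cap at CVwR = 50% – i.e. limits of about 69.84–143.19% – follow the EMA guideline, but this is not the full Method A/B evaluation):

```python
import numpy as np

def be_decision(pe, ci_lo, ci_hi, cv_wR):
    """Accept/reject one simulated study: conventional ABE limits for
    CVwR <= 30 %, widened (ABEL) limits exp(+/-0.76*s_wR) above, with the
    widening capped at CVwR = 50 % and the point-estimate restriction."""
    if cv_wR <= 0.30:
        lo, hi = 0.80, 1.25                                  # conventional ABE
    else:
        s_wR = np.sqrt(np.log(min(cv_wR, 0.50)**2 + 1.0))    # cap at CVwR = 50 %
        lo, hi = np.exp(-0.76 * s_wR), np.exp(0.76 * s_wR)   # ~0.6984-1.4319 at the cap
        if not (0.80 <= pe <= 1.25):                         # GMR restriction
            return False
    return (lo <= ci_lo) and (ci_hi <= hi)
```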

Let’s just start with the usual 1-stage design with a full replicate model and consider EMA scaling. If I simulate as above, I get empirical type I errors of 5.1%, 8.5%, 25.4%, 36.9%, 47.1%, and 49.8% for CVs of 10%, 30%, 35%, 40%, 50%, and 70%, respectively. (The sample size in each case was chosen for 80% power.)
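In outline, the empirical type I error comes from counting passing studies simulated at the nominal limit (a sketch, not my actual code: it reuses simulate_replicate_logdata() and be_decision() from above, uses equal CVs for T and R to keep it short, and the analysis step is a deliberately simplified paired contrast that is only valid under this simulation model – not Method A or B):

```python
import numpy as np
from scipy.stats import t

def evaluate_study(logT, logR, alpha=0.05):
    """Simplified analysis matching the simulation model above (no period or
    sequence effects): paired contrast for PE and 90 % CI, CVwR from the two
    reference administrations."""
    n  = logT.shape[0]
    d  = logT.mean(axis=1) - logR.mean(axis=1)     # per-subject T - R contrast
    se = d.std(ddof=1) / np.sqrt(n)
    tc = t.ppf(1.0 - alpha, df=n - 1)
    pe = np.exp(d.mean())
    ci_lo, ci_hi = np.exp(d.mean() - tc * se), np.exp(d.mean() + tc * se)
    dR    = logR[:, 0] - logR[:, 1]                # within-subject R difference
    s2_wR = dR.var(ddof=1) / 2.0                   # Var(dR) = 2*sigma_wR^2
    cv_wR = np.sqrt(np.exp(s2_wR) - 1.0)
    return pe, ci_lo, ci_hi, cv_wR

def empirical_alpha(n, cv, n_sim=10_000, gmr_true=1.25):
    """Fraction of studies passing when the true GMR sits on the nominal limit."""
    passed = 0
    for _ in range(n_sim):
        logT, logR = simulate_replicate_logdata(n, gmr_true, cv, cv)
        passed += be_decision(*evaluate_study(logT, logR))
    return passed / n_sim
```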

Should I instead try to find the center of the distribution that would allow me to control the type I error at exactly 5%? For example, when the CV is 40%, simulating from a normal distribution with mean centered on ln(0.748) gives an empirical type I error of ~5%.
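If that route makes sense, a crude bisection over the true GMR could locate such a center (sketch only, reusing empirical_alpha() from above; the 0.70–0.80 bracket and the assumption that alpha falls monotonically as the center moves below 0.80 are mine, and with 10^4 simulations per evaluation the Monte-Carlo noise limits the precision to a few tenths of a percent):

```python
def find_center(n, cv, target=0.05, n_sim=10_000, iters=10):
    """Bisection over the true GMR on the lower side for the center at which
    the empirical type I error of the sketch above hits the target."""
    lo, hi = 0.70, 0.80                    # assumed bracket: alpha(lo) < target < alpha(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if empirical_alpha(n, cv, n_sim=n_sim, gmr_true=mid) > target:
            hi = mid                       # still too liberal: move the center further from 1
        else:
            lo = mid
    return 0.5 * (lo + hi)
```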


Edit: Standard quotes restored. [Helmut]
