ElMaestro ★★★ Belgium?, 2017-08-17 13:02, Posting: # 17706

Hi all,

I decided to look into the effects of imbalance on Potvin designs. This is plain hell, since the Potvin equations only work for balanced situations. So I either have to derive equations that work for imbalance, or I have to do everything on the basis of matrix algebra, in which case I need to switch from C to R. I went for the latter and had to find ways to speed up the fits. Done now; it took some hundred hours.

The practical (though possibly not ideal) way of looking into imbalance is to force imbalance into stage 2. I.e. we look at stage 1 as normal, then we calculate stage 2 normally and expect balance, but we happen to lose e.g. 1 or 2 or whatever subjects from, say, sequence RT. That is how I am implementing it now. I am looking at my results now and trying to figure out if there is a story in this.

So let me ask you experts: let us say we look at e.g. CV = 0.3 and N1 = 12 (6 per sequence at stage 1). What would you expect in terms of power and type I error (i.e. any difference from table I in Potvin's work) if 2 subjects are lost from sequence RT during stage 2? Note: 2 RT subjects cannot be lost in stage 2 if there is only 1 per sequence at this stage, in which case I have programmed the loss as 1.

And yes, I know, the underlying imbalance model is not ideal; it would be much better to assume random dropouts across sequences and stages, but that is speed-wise almost impossible to implement. It would simply take forever to compute with my skills. Let me add: since stage 2s are often (not always, just often in practice) much larger than stage 1s (when N goes small and CV goes high), there is not much of a crime committed by focusing on imbalance arising at the second stage.

Have a good day and thanks for any input.

—
I could be wrong, but...
Best regards,
ElMaestro

R's base package has 274 reserved words and operators, along with 1761 functions. I can use 18 of them (about 14 of them properly). I believe this makes me the Donald Trump of programming.
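[Editor's note: ElMaestro's R/C code was not posted. For concreteness, here is a minimal Python sketch, not his implementation, of where imbalance bites: with n_RT ≠ n_TR subjects per sequence, the standard error of the log-ratio in a 2×2 crossover is s_w·√(½·(1/n_RT + 1/n_TR)), so losing subjects from one sequence widens the (1−2α) confidence interval and costs a degree of freedom. Function names and the scipy dependency are this sketch's own.]

```python
import math
from scipy import stats

def be_ci_2x2(pe, s_w, n_RT, n_TR, alpha=0.0294):
    """(1 - 2*alpha)*100% CI for the T/R ratio in a 2x2 crossover.

    pe         -- point estimate of the geometric mean ratio T/R
    s_w        -- within-subject SD on the log scale
    n_RT, n_TR -- subjects per sequence (may be unbalanced)
    """
    df = n_RT + n_TR - 2                               # residual df
    se = s_w * math.sqrt(0.5 * (1 / n_RT + 1 / n_TR))  # SE of log(T/R)
    t = stats.t.ppf(1 - alpha, df)
    return (math.exp(math.log(pe) - t * se),
            math.exp(math.log(pe) + t * se))

# CV = 0.30 translated to the within-subject SD on the log scale
s_w = math.sqrt(math.log(0.30 ** 2 + 1))

balanced   = be_ci_2x2(0.95, s_w, 6, 6)  # complete stage (6 + 6)
unbalanced = be_ci_2x2(0.95, s_w, 4, 6)  # 2 subjects lost from RT
print(balanced, unbalanced)              # the second CI is wider
```

Running it shows the CI from the 4 + 6 split is wider than from 6 + 6, which is the mechanism behind the expected loss of power.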
Helmut ★★★ Vienna, Austria, 2017-09-12 01:41, @ ElMaestro, Posting: # 17802

Hi ElMaestro,

» I am looking at my results now and trying to figure out if there is a story in this. So let me ask you experts: let us say we look at e.g. CV = 0.3 and N1 = 12 (6 per sequence at stage 1). What would you expect in terms of power and type I error (i.e. any difference from table I in Potvin's work) if 2 subjects are lost from sequence RT during stage 2?

Both the type I error and power will be lower than with a complete stage 2. Hence, if we use a framework which controls the TIE and estimate the sample size of stage 2 with an appropriate algo (1 df less than usual due to the stage term), we should not be worried about the TIE – a smaller sample size translates into a lower chance to show BE and hence also a lower TIE. Like in fixed-sample designs, the impact of dropouts on power will be small.

On the other hand, in TSDs it is not a clever idea to include more than the estimated n_2 subjects (greedy CROs selling “additional subjects in order to compensate for potential loss in power due to dropouts” to sponsors). That’s a no-go because it might inflate the TIE. The literature on GSDs and adaptive methods is full of tricks adjusting the final α if the planned sample size was overrun. You don’t want to go there.

Not for an old salt like you but for novices: if you think about modifying one of the validated frameworks, it is mandatory to assess the operational characteristics (TIE, power) in simulations. Pocock’s α 0.0294 is not a natural constant. General rules: futility criteria lower the TIE, whereas a minimum stage 2 sample size might lead to an inflated TIE. Example: Molins et al.* modified Potvin’s methods by introducing a futility bound N_max = 150 and a minimum n_2 of 1.5×n_1. Their adjusted α was 0.0301 for ‘modified B’ and 0.0280 for ‘modified C’.
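[Editor's note: the point that any modified framework must have its operating characteristics simulated can be illustrated with a sketch. The following Python is NOT Potvin's validated code: it is a rough Monte-Carlo sketch of a ‘type B’ decision scheme that simulates summary statistics only (point estimate ~ lognormal, variance ~ scaled χ²), uses a shifted-t power approximation, pools the two stages naively instead of fitting the full model with a stage term, and has no futility rules. All function names and defaults are this sketch's own.]

```python
import numpy as np
from scipy import stats

def tost_pass(d, se, df, alpha=0.0294):
    """BE shown if the (1-2*alpha) CI of the log-ratio lies in [ln 0.8, ln 1.25]."""
    tc = stats.t.ppf(1 - alpha, df)
    return (d - tc * se > np.log(0.80)) and (d + tc * se < np.log(1.25))

def approx_power(cv, n, gmr=0.95, alpha=0.0294, extra_df=2):
    """Shifted-t approximation of TOST power for a balanced 2x2 crossover."""
    s = np.sqrt(np.log(cv ** 2 + 1))
    se = s * np.sqrt(2 / n)
    df = n - extra_df
    tc = stats.t.ppf(1 - alpha, df)
    d1 = (np.log(gmr) - np.log(0.80)) / se
    d2 = (np.log(1.25) - np.log(gmr)) / se
    return max(0.0, stats.t.cdf(d1 - tc, df) + stats.t.cdf(d2 - tc, df) - 1)

def total_n(cv, n1, target=0.80):
    """Smallest even total N >= n1 + 4 with approx. power >= target
    (one df less than usual, for the stage term)."""
    n = n1 + 4
    while approx_power(cv, n, extra_df=3) < target and n < 300:
        n += 2
    return n

def sim_tie(cv=0.30, n1=12, gmr_true=1.25, nsims=2000, alpha=0.0294, seed=7):
    """Empirical type I error at the upper BE limit for this sketched scheme."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(cv ** 2 + 1))
    mu = np.log(gmr_true)
    passed = 0
    for _ in range(nsims):
        df1 = n1 - 2
        d1 = rng.normal(mu, sigma * np.sqrt(2 / n1))    # stage-1 estimate
        s1 = sigma * np.sqrt(rng.chisquare(df1) / df1)  # stage-1 SD
        if tost_pass(d1, s1 * np.sqrt(2 / n1), df1, alpha):
            passed += 1
            continue
        cv1 = np.sqrt(np.exp(s1 ** 2) - 1)
        if approx_power(cv1, n1, alpha=alpha) >= 0.80:  # enough power: stop, fail
            continue
        N = total_n(cv1, n1)
        n2 = N - n1                                     # second stage
        d2 = rng.normal(mu, sigma * np.sqrt(2 / n2))
        s2sq = sigma ** 2 * rng.chisquare(n2 - 2) / (n2 - 2)
        d = (n1 * d1 + n2 * d2) / N                     # naive pooling
        sp = np.sqrt(((n1 - 2) * s1 ** 2 + (n2 - 2) * s2sq) / (N - 4))
        if tost_pass(d, sp * np.sqrt(2 / N), N - 3, alpha):
            passed += 1
    return passed / nsims

print(sim_tie())  # empirical type I error of this sketch; expect it near 0.05
```

Swapping in a futility bound or a minimum n_2 and rerunning is exactly the kind of check the post calls mandatory before using a modified scheme.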
—
Diftor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut ★★★ Vienna, Austria, 2017-09-19 17:17, @ ElMaestro, Posting: # 17819

Hi ElMaestro,

I had a closer look at the loss of power due to dropouts.

» Let us say we look at e.g. CV = 0.3 and N1 = 12 (6 per sequence at stage 1). What would you expect in terms of power […] if 2 subjects are lost from sequence RT during stage 2?
»
» […] stage 2s are often (not always, just often in practice) much larger than stage 1s…

Larger is OK but too large isn’t. With the exception of Xu’s methods (my current favorite) you don’t reach the target power with small n_1. ‘Potvin B’:

[figure omitted]
My current practice is to use an n_1 which is ~75–80% of the fixed-sample design size with a ‘best-guess’ CV. With n_1 = 30 we exceed the target power (83.4%), only 41.8% of studies proceed to the second stage (i.e., you have a 56.6% chance to show BE already in the first), and the median total sample size is 30 (!). OK, the mean is 39…

The distribution looks like this:

[figure omitted]

Even with a futility criterion (N_max = 64) the final power, 77.6%, is close to the target, but power in the first stage is still 56.6%. With a futility criterion on the total sample size you can not only put the sponsor at ease but, hopefully, also the BSWP. The Type I Error drops from 0.0440 to 0.0427.

I don’t like small first stages.

—
Diftor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
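[Editor's note: the ~75–80% rule of thumb is easy to script. Below is a hedged Python sketch, not Helmut's code: it finds the fixed-design sample size via a shifted-t approximation of TOST power (exact Owen's-Q methods, as in the PowerTOST R package, differ slightly), then takes ~75% of it rounded to an even number. The α = 0.05, GMR = 0.95, 80%-power defaults match the usual fixed-design setting; function names are this sketch's own.]

```python
import math
from scipy import stats

def approx_power(cv, n, gmr=0.95, alpha=0.05):
    """Shifted-t approximation of TOST power for a balanced 2x2 crossover."""
    s = math.sqrt(math.log(cv ** 2 + 1))  # log-scale within-subject SD
    se = s * math.sqrt(2 / n)
    df = n - 2
    tc = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(0.80)) / se
    d2 = (math.log(1.25) - math.log(gmr)) / se
    return max(0.0, stats.t.cdf(d1 - tc, df) + stats.t.cdf(d2 - tc, df) - 1)

def fixed_n(cv, gmr=0.95, alpha=0.05, target=0.80):
    """Smallest even N with approximate power >= target."""
    n = 4
    while approx_power(cv, n, gmr, alpha) < target:
        n += 2
    return n

n_fixed = fixed_n(0.30)             # fixed design, CV = 0.30, GMR 0.95
n1 = 2 * round(0.75 * n_fixed / 2)  # ~75% of it, rounded to an even number
print(n_fixed, n1)                  # 40 30
```

For CV = 0.30 this gives a fixed-design N of 40 and hence n_1 = 30, the first-stage size used in the post.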