Helmut ★★★ Vienna, Austria, 2016-12-30 01:22 Posting: # 16908

Dear all,

on a recent occasion… We know that the minimum n_{2} = 2 required in the Q&A document is meaningless: either a study stops in the first stage or it continues with at least two subjects anyway. However, do not go further unless you know what you are doing. If you require a minimum stage 2 sample size, all studies in which a smaller sample size would already be sufficient to demonstrate BE with the target power are forced up to that minimum. Higher sample size ⇒ more degrees of freedom ⇒ narrower CI ⇒ higher probability to pass BE. In other words, the TIE will also increase and one would have to use a lower adjusted α. As an example (figure not preserved in this archive): modify Potvin’s Methods B and C at the location (n_{1} 12, CV 20%) of the maximum TIE and naïvely apply the ‘natural constant’ α 0.0294. Not a very good idea. Own simulations are mandatory in order to find a suitable α!

— Diftor heh smusma 🖖 Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
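[Editor's note] The chain "higher sample size ⇒ more degrees of freedom ⇒ narrower CI ⇒ higher probability to pass BE" can be illustrated without Power2Stage. A minimal plain-Python sketch (not the forum's R simulations; the Student-t quantiles are hardcoded from standard tables) computes the 90% CI half-width of a 2×2 crossover for a fixed CV:

```python
import math

# Hardcoded upper 95% Student-t quantiles (two one-sided tests -> 90% CI),
# taken from standard tables; df = n - 2 in a 2x2 crossover.
T95 = {10: 1.8125, 22: 1.7171, 46: 1.6787}

CV = 0.20                                # intra-subject CV
sd = math.sqrt(math.log(CV**2 + 1))      # within-subject SD on the log scale

for n in (12, 24, 48):
    df = n - 2
    se = sd * math.sqrt(2 / n)           # SE of the difference of log-means
    halfwidth = T95[df] * se             # half-width of the 90% CI (log scale)
    print(f"n={n:2d}  df={df:2d}  CI half-width = {halfwidth:.4f}")
```

Both the larger n (smaller standard error) and the larger df (smaller t-quantile) shrink the interval, so a study forced to a larger second stage passes more easily at the 1.25 boundary; that is exactly why the TIE inflates.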
ElMaestro ★★★ Denmark, 2016-12-30 12:12 @ Helmut Posting: # 16910

Hi Helmut,

I am sure you are right, but I can’t follow you; I mean, I can’t readily understand which question you tried to answer. So let me ask the forbidden question: "Can you reformulate?"

» Higher sample size ⇒ more degrees of freedom ⇒ narrower CI ⇒ higher probability to pass BE.
» In other words, the TIE will also increase and one would have to use a lower adjusted α.

This is one thing I did not get. Does that logic also work when we simulate a true GMR of 0.8 or 1.25 for the type I error? I find it hard to convince myself. Somehow I guess the regulators just wanted to say that inclusion of a single subject in stage 2 would not be OK. They are right, and that is not rocket science.

— Pass or fail! ElMaestro
Helmut ★★★ Vienna, Austria, 2016-12-30 14:01 @ ElMaestro Posting: # 16911

Hi ElMaestro,

» So let me ask the forbidden question: "Can you reformulate?"
» » Higher sample size ⇒ more degrees of freedom ⇒ narrower CI ⇒ higher probability to pass BE.
» » In other words, the TIE will also increase and one would have to use a lower adjusted α.
» This is one thing I did not get.

I’ll give two examples. Both at the location (n_{1} 12, CV 20%) of the maximum TIE. Simulating for power (at 0.95):

[Power2Stage code/output not preserved in this archive]
» Does that logic also work when we simulate true GMR 0.8 or 1.25 for type I error? I find it hard to convince myself.

Yes, it does – and this was my point. This time simulating for the TIE (at 1.25):

[Power2Stage code/output not preserved in this archive]
» Somehow I guess regulators just wanted to say that inclusion of a single subject in stage 2 would not be ok. They are right and that is not rocket science.

I think that not performing the second stage with a single subject is a no-brainer. I guess that two was a compromise. AFAIK, Alfredo suggested 12 subjects to the BSWP.*
ElMaestro ★★★ Denmark, 2016-12-30 17:19 @ Helmut Posting: # 16913

Hi Helmut,

I am still lost, I must confess. Perhaps it is because I am not using the Power2Stage package at all.

» I’ll give two examples. Both at the location (n_{1} 12, CV 20%) of the maximum TIE.
» Simulating for power (at 0.95):

[quoted Power2Stage call not preserved in this archive]
Should this not be min.n2=2? Or is it the "same difference"??

Do you think you have it in your heart to explain in slow motion, to a dimwit like me who has read your posts quite a few times, which point you are trying to prove or investigate? Otherwise I am afraid I will need to question you next time we meet f2f. And that might not be in the distant future.
Helmut ★★★ Vienna, Austria, 2016-12-30 18:00 @ ElMaestro Posting: # 16914

Hi ElMaestro,

» » Simulating for power (at 0.95):

[quoted Power2Stage call not preserved in this archive]
» Should this not be min.n2=2?

Nope. This is the original Potvin ‘Method B’. There is no minimum n_{2} in the paper, right? However, the functions in Power2Stage are constructed in such a way that (in crossover TSDs) any estimated sample size has to be an even number. If one states min.n2=1 it will automatically be rounded up to 2. The same goes for sampleN.TOST(). Hence, to state min.n2=2 is a waste of time (see also the footnote to this post).

» Do you think you have it in your heart to explain in slow motion to a dimwit like me who read your posts quite a few times which point you are trying to prove or investigating?

I’ll try. Without a minimum n_{2}, what would happen in a study which – following the conditions of the framework – could proceed to the second stage? n_{2} could be any even number. Say we had n_{1} 24 and estimate the total sample size N (for the stage 1 CV, assumed GMR, and target power) as 30. Hence, n_{2} 6. If we mandate n_{2} = max(1.5·n_{1}, N − n_{1}), we have to perform the second stage in 36 subjects instead of 6. In the pooled analysis we will have 60 subjects instead of 30. Much higher power (nice for wealthy sponsors) but not so nice if we look at the TIE. Since the final size is twice as large, the chance to pass BE (at 1.25) will be larger as well. Even if we keep everything else equal, the DFs come into play. Therefore, the ‘original’ adjusted α might not sufficiently control the TIE – and we would need a lower one. That’s pure reasoning (wetware).

» Otherwise I am afraid I will need to question you next time we meet f2f. And that might not be in the distant future.

Really? Great!
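[Editor's note] The arithmetic of the post above can be put into a few lines; a sketch under the post's assumptions (n₁ = 24, re-estimated total N = 30, crossover sample sizes rounded up to the next even number):

```python
import math

def up_to_even(x: float) -> int:
    """Round up to the next even integer (crossover: balanced sequences)."""
    return 2 * math.ceil(x / 2)

n1, N = 24, 30                 # stage 1 size and re-estimated total sample size

n2_plain = max(N - n1, 2)                      # no minimum beyond n2 = 2 -> 6
n2_rule  = max(up_to_even(1.5 * n1), N - n1)   # mandated n2 >= 1.5*n1   -> 36

print(f"without minimum:   n2 = {n2_plain}, pooled N = {n1 + n2_plain}")  # 6, 30
print(f"with n2 >= 1.5*n1: n2 = {n2_rule}, pooled N = {n1 + n2_rule}")    # 36, 60
```

Note that `up_to_even(1)` returns 2, which is why stating min.n2=2 in Power2Stage is redundant: the even-rounding already enforces it.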
ElMaestro ★★★ Denmark, 2016-12-30 18:50 @ Helmut Posting: # 16915

Ah, got it, thanks Helmut,

» (...a bunch of blah blah blah...)
» That’s pure reasoning (wetware).

I think you are saying that:
– the alpha_{2} that works for n_{2,min}=2 or 0 or 1 or whatever is not necessarily the alpha_{2} that works for n_{2,min}=1.5*n_{1}, all other factors being equal.
It is tempting to say power increases with sample size, and since type I error is a kind of power, this is the logic behind the observation. I think the issue is somewhat more complex than just that. These two-stage thingies are funny objects that defy all kinds of logic. Does it change anything though?? I mean, you and I both argued in the past that universally functional alphas do not exist, so whenever someone makes a smart/clever/sophisticated/dumb/intelligent/braindead amendment to Potvin B or C etc., simulations should always be undertaken to make sure the type I error is not compromised.
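[Editor's note] The "type I error is a kind of power" point can be checked with a toy Monte-Carlo simulation. This is plain Python, not Power2Stage: a single-stage TOST at a true GMR of 1.25, with CV 40% chosen so the effect is clearly visible, and t-quantiles hardcoded from tables. The chance of wrongly passing BE grows with the sample size:

```python
import math
import random

random.seed(42)

# Upper 95% Student-t quantiles from tables for the dfs used below (df = n - 2).
T95 = {10: 1.8125, 22: 1.7171, 46: 1.6787}

CV = 0.40                                   # high CV so the effect is visible
sd = math.sqrt(math.log(CV**2 + 1))         # log-scale within-subject SD
mu = math.log(1.25)                         # true GMR sits on the BE boundary
lo, hi = math.log(0.80), math.log(1.25)     # BE limits on the log scale
NSIMS = 20_000

tie = {}
for n in (12, 24, 48):
    df, se = n - 2, sd * math.sqrt(2 / n)
    passed = 0
    for _ in range(NSIMS):
        pe = random.gauss(mu, se)                         # point estimate
        s2 = sd**2 * random.gammavariate(df / 2, 2) / df  # residual variance
        hw = T95[df] * math.sqrt(s2 * 2 / n)              # 90% CI half-width
        if lo <= pe - hw and pe + hw <= hi:               # CI within limits?
            passed += 1
    tie[n] = passed / NSIMS
    print(f"n={n:2d}  empirical TIE ~ {tie[n]:.4f}")
```

With n = 12 the average CI is too wide to ever fit inside the limits, so the empirical TIE is near zero; as n grows it climbs toward the nominal α of 0.05. This is the single-stage analogue of the thread's argument: force a larger stage 2 and the TIE of the whole TSD rises, so the adjusted α that worked before may no longer suffice.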
Helmut ★★★ Vienna, Austria, 2016-12-30 19:00 @ ElMaestro Posting: # 16916

Hi ElMaestro,

now you got it!

» – the alpha_{2} that works for n_{2,min}=2 or 0 or 1 or whatever is not necessarily the alpha_{2} that works for n_{2,min}=1.5*n_{1}, all other factors being equal.

Yes.

» It is tempting to say power increases with sample size, and since type I error is a kind of power, this is the logic behind the observation.

Yes.

» I think the issue is somewhat more complex than just that. These two-stage thingies are funny objects that defy all kinds of logic.

Maybe and yes.

» Does it change anything though??

For me, no.

» I mean you and I both argued in the past that universally functional alphas do not exist, so whenever someone makes a smart/clever/sophisticated/dumb/intelligent/braindead amendment to Potvin B or C etc., simulations should always be undertaken to make sure the type I error is not compromised.

Exactly. To quote Jones and Kenward:*

[…] before using any of the methods […], their operating characteristics should be evaluated for a range of values of n_{1}, CV and true ratio of means that are of interest, in order to decide if the Type I error rate is controlled, the power is adequate and the potential maximum total sample size is not too great. (my emphases)