d_labes ★★★ · Berlin, Germany · 2011-01-31 15:24 · Posting: # 6520

Dear All,

in the description of group-sequential designs with interim sample size estimation, Potvin et al.1) describe a Method B. Because they use the same nominal alpha = 0.0294 in both evaluation stages, I wonder how their decision scheme reads if one attempts to use some more extreme alpha-spending values, for instance O'Brien-Fleming boundaries with alpha1 = 0.005 and alpha2 = 0.048. I guess the Method B decision scheme then reads: evaluate BE at stage 1 …

Q1: Do you think I have inserted the right alpha values?

Q2: If yes in Q1: it may happen that in the sample size estimation step the resulting N is lower than the sample size of stage 1, i.e. no stage 2 is necessary. What to do then? Evaluate via the stage 1 model, but with alpha = alpha2 = 0.048?

Q3: Do you think that the power calculation makes sense if executed with the point estimate from stage 1 instead of GMR = 0.95?

Q4: Do you think Method B is nearer than Method C to the sentence "… The analysis of the first stage data should be treated as an interim analysis and both analyses conducted at adjusted significance levels (with the confidence intervals accordingly using an adjusted coverage probability which will be higher than 90%) …" from the EMA guideline?

1) Potvin D, DiLiberti CE, Hauck WW, Parr AF, Schuirmann DJ, Smith RA. Sequential design approaches for bioequivalence studies with crossover designs. Pharm Stat. 2008; 7(4): 245–262. DOI: 10.1002/pst.294

— Regards, Detlew
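A minimal R sketch of the guessed scheme (my own rendering, not a validated procedure: which alpha drives the power and sample-size steps is itself an assumption — alpha2 is used here; CI.BE(), power.TOST() and sampleN.TOST() are from package PowerTOST):

```r
library(PowerTOST)

# Sketch of the guessed Method B scheme with stage-specific alphas.
# alpha1/alpha2 are the O'Brien-Fleming-like values from the question.
method.B <- function(pe1, CV1, n1, alpha1 = 0.005, alpha2 = 0.048,
                     GMR = 0.95, targetpower = 0.80) {
  # stage 1: evaluate BE with a 100(1 - 2*alpha1)% CI
  ci <- CI.BE(alpha = alpha1, pe = pe1, CV = CV1, n = n1)
  if (ci[1] >= 0.80 && ci[2] <= 1.25)
    return("BE accepted at stage 1")
  # not BE: check power with the fixed GMR = 0.95 (alpha2 assumed here)
  if (power.TOST(alpha = alpha2, theta0 = GMR, CV = CV1, n = n1) >= targetpower)
    return("failed with adequate power: stop")
  # re-estimate the total sample size; stage 2 would be evaluated at alpha2
  N <- sampleN.TOST(alpha = alpha2, targetpower = targetpower, theta0 = GMR,
                    CV = CV1, print = FALSE)[, "Sample size"]
  if (N <= n1) return("N <= n1: the ambiguous case of Q2")
  paste("proceed to stage 2 with", N - n1,
        "more subjects; evaluate the pooled data at alpha2")
}
```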
jdetlor ☆ · 2011-02-01 18:17 · @ d_labes · Posting: # 6531

Dear D. Labes,

❝ Q1: Do you think I have inserted the right alpha values?

In terms of stage 1/2 location, they look correct to me.

❝ Q2: If yes in Q1: it may happen that in the sample size estimation step the resulting N is lower than the sample size of stage 1, i.e. no stage 2 is necessary. What to do then? Evaluate via the stage 1 model, but with alpha = alpha2 = 0.048?

Gould1) discusses the allocation of alpha between stages and specifies that the critical values (and hence the alpha values) should be the same for each stage to maintain rejection consistency. The determination of alpha relates to the allocation of subjects between stages. What complicates this with respect to Potvin et al. is that the ratio of subjects is not known for the final analysis, so the impact on the choice of alpha would need to be evaluated through simulation.

❝ Q3: Do you think that the power calculation makes sense if executed with the point estimate from stage 1 instead of GMR = 0.95?

Potvin et al. do mention this. I believe they reference Cui et al.2), who determine that if this approach is applied naively, it can inflate the type I error rate by up to 30%.

❝ Q4: Do you think Method B is nearer than Method C to the sentence "… The analysis of the first stage data should be treated as an interim analysis and both analyses conducted at adjusted significance levels (with the confidence intervals accordingly using an adjusted coverage probability which will be higher than 90%) …" from the EMA guideline?

They both have interim analyses; method B evaluates BE initially, whereas method C evaluates the power first, then BE. I don't know if you could say that one is closer than the other.

1) Gould AL. Group sequential extensions of a standard bioequivalence testing procedure. J Pharmacokinet Biopharm. 1995; 23: 57–86.
2) Cui L, Hung HMJ, Wang S-J. Modification of sample size in group sequential clinical trials. Biometrics. 1999; 55: 853–857.

J. Detlor
d_labes ★★★ · Berlin, Germany · 2011-02-02 10:15 · @ jdetlor · Posting: # 6537

Dear J. Detlor,

❝ ❝ Q2: If yes in Q1: it may happen that in the sample size ...
❝
❝ The determination of alpha relates to the allocation of subjects between stages. What complicates this with respect to Potvin et al. is that the ratio of subjects is not known for the final analysis, so the impact on the choice of alpha would need to be evaluated through simulation.

The ratio of subjects between both stages is never known at the planning stage in methods with interim sample size adaption. Therefore this is also true for the Potvin et al. methods. Using any scheme of nominal alpha values from the classical group-sequential designs is therefore not exactly correct, because they rely on the same number of subjects in each stage, and the impact on the overall alpha has to be shown.

❝ ❝ Q3: Do you think that the power calculation makes sense if executed with the point estimate from stage 1 instead of GMR = 0.95?
❝
❝ Potvin et al. do mention this. I believe they reference Cui et al.2), who determine that if this approach is applied naively, it can inflate the type I error rate by up to 30%.

It needs to be shown whether this is true also for cross-over designs aimed at evaluating equivalence. Do you know any 'non-naive' application?

❝ ❝ Q4: Do you think Method B is nearer than Method C to the sentence ...
❝
❝ They both have interim analyses; method B evaluates BE initially, whereas method C evaluates the power first, then BE. I don't know if you could say that one is closer than the other.

My concern was that Method C has the possibility to evaluate the data at stage 1 with alpha = 0.05 if the power was great enough, i.e. without paying any penalty. I know Helmut prefers Method C and has already convinced the German BfArM. This would also be my preferred choice, according to the reasoning of Potvin et al. in their recommendations: "Another advantage of method C is that it is designed so that if the study were found to have adequate power at the first stage, the alpha for that study would be the same as if it were designed to be single-stage". Why pay a penalty for repeated testing if no repeated tests are applied?

But ... see the comments paper and remember the discussion at the London EGA symposium (2010). Perhaps we have to take the guidances literally here to make our sponsors happy?

Any experience other than Helmut's?

— Regards, Detlew
jdetlor ☆ · 2011-02-02 17:08 · @ d_labes · Posting: # 6545

Dear D. Labes,

❝ The ratio of subjects between both stages is never known at the planning stage in methods with interim sample size adaption. Therefore this is also true for the Potvin et al. methods.

I agree with this. What I was driving at was the sequential-analysis side: traditionally the time points are specified beforehand (equally spaced or not) and the values for alpha can be calculated.

❝ Using any scheme of nominal alpha values from the classical group-sequential designs is therefore not exactly correct, because they rely on the same number of subjects in each stage, and the impact on the overall alpha has to be shown.

True, and I believe this part is what Potvin et al. were exploring: is there a value for alpha that would maintain an appropriate type I error rate over both stages? I would point out that Gould in his paper discusses different ratios, not just the case where the ratio of subjects between stages is 1:1.

❝ To express my question more precisely: the power calculation in Method B is done, in case of non-BE, at alpha = 0.0294. Possibly this is superfluous, because using the point estimate will always result in a power below that desired?

I'm still not sure I follow. Are you concerned about the use of a reduced alpha (0.0294) for the power calculation, or about using the point estimate instead of GMR = 0.95? We may or may not have less power than initially calculated due to the new variability estimate from stage 1 (using the same GMR = 0.95).

❝ It needs to be shown whether this is true also for cross-over designs aimed at evaluating equivalence.
❝ Do you know any 'non-naive' application?

Agreed. But this depends on your definition of 'naive'.

❝ My concern was that Method C has the possibility to evaluate the data at stage 1 with alpha = 0.05 if the power was great enough, i.e. without paying any penalty.
❝ I know Helmut prefers Method C and has already convinced the German BfArM.
❝ This would also be my preferred choice, according to the reasoning of Potvin et al. in their recommendations: "Another advantage of method C is that it is designed so that if the study were found to have adequate power at the first stage, the alpha for that study would be the same as if it were designed to be single-stage". Why pay a penalty for repeated testing if no repeated tests are applied?

My thoughts as well.

❝ Any experience other than Helmut's?

Sorry, nothing to add. However, if you are looking for a small number of subjects up front and a small value of alpha for evaluating BE at stage 1, you could use Gould's method. This is provided you know the maximum number of subjects you will enroll (not adaptive), and this maximum number will cover a reasonable amount of variability.

J. Detlor
d_labes ★★★ · Berlin, Germany · 2011-02-02 17:25 · @ jdetlor · Posting: # 6546

Dear J. Detlor,

thanx for sharing your thoughts. And thanks for pointing me repeatedly to Gould's method. But I must confess that my brain was too small to understand it anyway.

— Regards, Detlew
jdetlor ☆ · 2011-02-02 18:02 · @ d_labes · Posting: # 6547

Dear D. Labes,

❝ thanx for sharing your thoughts.

Not a problem. I must say thanks to you for your countless interesting and informative posts. I, and I would wager many others, have learned quite a bit from your contributions here.

❝ And thanks for pointing me repeatedly to Gould's method.

I swear I have no affiliation with Gould.

❝ But I must confess that my brain was too small to understand it anyway.

J. Detlor
d_labes ★★★ · Berlin, Germany · 2011-02-07 15:32 · @ d_labes · Posting: # 6567

Dear All,

Potvin et al. describe the use of GMR = 0.95 in the sample size adaption step of their Methods B/C. If we assume a more extreme null ratio and want to calculate the empirical power at that GMR (via simulation): is it reasonable / mandatory to use that GMR in the sample size adaption step?

— Regards, Detlew
Helmut ★★★ · Vienna, Austria · 2011-02-07 16:24 · @ d_labes · Posting: # 6568

Dear D. Labes,

good question, next question! Let's look at example 1 / method B/C: in stage 1 the CI was 104.27–134.17% (CV 14.56%) and the PE 118.28%. Power for method B (α 0.0294) is reported as 75.6% (I get 76.3%) and for method C (α 0.05) as 84.1% (I get 85.1%). Maybe the discrepancy comes from using Hauschke's approximations?* Anyhow, the important point is that the calculation is based on a T/R of 95% and not the actual 118.28%! I missed that completely…

* Edit: My modification of Hauschke's approximations (which gives the sample size, not power) comes up with 76.7% (α 0.0294) and 84.6% (α 0.05). No idea how Potvin et al. got their values.

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
d_labes ★★★ · Berlin, Germany · 2011-02-07 17:29 · @ Helmut · Posting: # 6569

Dear Helmut,

❝ good question, next question!

It was the best question I had at the moment.

❝ ... No idea how Potvin et al. got their values.

According to their formula given in paragraph 2.3 they use the approximation via the central shifted Student's t-distribution, shortly described in the accompanying material of the package PowerTOST. Check it via the undocumented and hidden function

.approx2.power.TOST(alpha=0.05, se=sqrt(0.020977), diffm=log(0.95), …

But what you discuss was not my point, I think. Or was your first sentence already the answer? I need to know the power of Potvin B/C if I assume 'true' GMR = 1.08 and 'true' CV = 0.3, for instance. What to do with the steps in the decision scheme that use GMR = 0.95 (the power calculation and the sample size adaption): stay with 0.95 or replace it with 1.08?

— Regards, Detlew
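For checking, the paragraph-2.3 approximation can be re-implemented in a few lines. A sketch of my own (not the hidden function itself), assuming n = 12 in stage 1 of example 1:

```r
library(PowerTOST)   # only needed for the exact comparison below

# Shifted central-t approximation of TOST power (Potvin et al., par. 2.3)
approx.power <- function(alpha, GMR, CV, n, theta1 = 0.80, theta2 = 1.25) {
  mse  <- log(1 + CV^2)       # MSE on the log scale
  sem  <- sqrt(mse * 2/n)     # SE of the difference, 2x2 crossover
  df   <- n - 2
  tval <- qt(1 - alpha, df)
  pwr  <- pt((log(theta2) - log(GMR))/sem - tval, df) -
          pt((log(theta1) - log(GMR))/sem + tval, df)
  max(0, pwr)                 # the approximation can go negative
}
# example 1 of the paper (CV 14.56%, n = 12 in stage 1):
approx.power(alpha = 0.05,   GMR = 0.95, CV = 0.1456, n = 12) # ~0.84 (84.1% reported)
approx.power(alpha = 0.0294, GMR = 0.95, CV = 0.1456, n = 12) # ~0.75 (75.6% reported)
power.TOST(alpha = 0.05, theta0 = 0.95, CV = 0.1456, n = 12)  # exact: ~0.85
```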
Helmut ★★★ · Vienna, Austria · 2011-02-07 17:47 · @ d_labes · Posting: # 6570

Dear D. Labes!

❝ According to their formula given in paragraph 2.3 they use the approximation via the central shifted Student's t-distribution, …
❝ Check it via the undocumented and hidden function …

Ah, funny.

❝ But what you discuss was not my point, I think.

I know - and knew. I was carried away by reading the paper again and, to be honest, recalculated the examples for the first time.

❝ Or was your first sentence already the answer?

Come on!

❝ I need to know the power of Potvin B/C if I assume 'true' GMR = 1.08 and 'true' CV = 0.3, for instance.

Sure. I'm a little bit confused right now, because it seems that I misunderstood the decision procedure. Until you came up with your nice question I thought that the power calculation is done on the actual PE of stage 1 (1.08, not 0.95) and only the sample size calculation with 0.95 (no adaption for effect size). From the examples the former is not true – both are done with 0.95. Now I'm stuck. Potvin's simulations were done with 0.95; is it that simple to use e.g. 0.90 instead?

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
ElMaestro ★★★ · Denmark · 2011-02-07 18:06 · @ Helmut · Posting: # 6571

Hi HS,

❝ Sure. I'm a little bit confused right now, because it seems that I misunderstood the decision procedure. Until you came up with your nice question I thought that the power calculation is done on the actual PE of stage 1 (1.08, not 0.95) and only the sample size calculation with 0.95 (no adaption for effect size). From the examples the former is not true – both are done with 0.95. Now I'm stuck. Potvin's simulations were done with 0.95; is it that simple to use e.g. 0.90 instead?

I am no expert here at all. My impression was that ordinarily a semi-blinded review of the data is done, where the apparent variation is calculated given an assumption about the PE. I always thought about it like Heisenberg's uncertainty principle: you can't know both the PE and the CV at the interim stage. If the PE (even though not calculated) actually deviates a lot from the assumption, then the apparent CV becomes large.

— Pass or fail! ElMaestro
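A toy numerical analogue of that point (a deliberately simplified one-sample sketch, not Potvin's actual procedure; the 'blinded' estimator around an assumed GMR is my illustration):

```r
# If the variability is computed around an *assumed* effect instead of the
# estimated one, any deviation of the true PE from the assumption shows up
# as extra apparent variability.
set.seed(42)
n    <- 24
sd.w <- sqrt(log(1 + 0.20^2))                  # log-scale SD for a CV of 20%
d    <- rnorm(n, mean = log(1.15), sd = sd.w)  # log T/R differences, true GMR 1.15
s2.unblinded <- var(d)                         # around the estimated effect
s2.blinded   <- sum((d - log(0.95))^2)/(n - 1) # around the assumed GMR 0.95
CV <- function(s2) sqrt(exp(s2) - 1)
CV(s2.unblinded)  # close to the true 20%
CV(s2.blinded)    # noticeably larger: the deviation masquerades as variability
```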
Helmut ★★★ · Vienna, Austria · 2011-02-07 18:17 · @ ElMaestro · Posting: # 6572

My Capt'n!

❝ I am no expert here at all.

❝ My impression was ...

Sorry, here we have to put aside the glasses. Quote from the paper:

"No requirement for blinding to treatment effect at each stage, since data would routinely be unblinded in pharmacokinetic (PK) BE studies."

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
ElMaestro ★★★ · Denmark · 2011-02-07 19:35 · @ Helmut · Posting: # 6573

Dear HS,

❝ No requirement for blinding to treatment effect at each stage, since data would routinely be unblinded in pharmacokinetic (PK) BE studies.

That's a pretty good point to raise given my previous post.

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-08 10:07 · @ Helmut · Posting: # 6579

Dear Helmut!

First I'm very glad that I helped you to gnosis.

❝ Potvin's simulations were done with 0.95; is it that simple to use e.g. 0.90 instead?

Here the simul'ants' answer: datasets simulated with CV = 32%, GMR = 1.08.

Results taken Potvin literally (i.e. GMR = 0.95 in the power calculations and the sample size adaption): ntotal …

Results with GMR = 0.95 always replaced by 1.08: ntotal …

Which would you take?

BTW: 100 000 simuls each. Total simul'ants time approx. 9.5 hours (with some clumsy R code).

— Regards, Detlew
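For anyone wanting to reproduce such numbers, a bare-bones version of the method B simulation could look like the sketch below (my own simplifying assumptions: the PE is drawn from the log-normal, the MSE from the χ²-distribution, and the stage term of the pooled stage-2 analysis is ignored; CI.BE(), power.TOST() and sampleN.TOST() are from package PowerTOST):

```r
library(PowerTOST)

# GMR.true generates the data; GMR.ss is used in the power / sample-size
# steps (0.95 per Potvin, or 1.08 for the alternative above).
sim.methodB <- function(nsims = 1e5, GMR.true = 1.08, CV = 0.32, n1 = 12,
                        alpha = 0.0294, GMR.ss = 0.95, targetpower = 0.80) {
  mse   <- log(1 + CV^2)
  pass1 <- pass2 <- 0
  Ntot  <- numeric(nsims)
  for (i in seq_len(nsims)) {
    # stage 1: simulate PE and MSE of a 2x2 crossover with n1 subjects
    pe1  <- exp(rnorm(1, log(GMR.true), sqrt(mse * 2/n1)))
    mse1 <- mse * rchisq(1, n1 - 2)/(n1 - 2)
    CV1  <- sqrt(exp(mse1) - 1)
    ci   <- CI.BE(alpha = alpha, pe = pe1, CV = CV1, n = n1)
    Ntot[i] <- n1
    if (ci[1] >= 0.80 && ci[2] <= 1.25) { pass1 <- pass1 + 1; next }
    if (power.TOST(alpha = alpha, theta0 = GMR.ss, CV = CV1, n = n1) >=
        targetpower) next                    # failed with adequate power
    # sample size adaption and stage 2
    N  <- sampleN.TOST(alpha = alpha, targetpower = targetpower,
                       theta0 = GMR.ss, CV = CV1,
                       print = FALSE)[, "Sample size"]
    n2 <- max(N - n1, 2)
    Ntot[i] <- n1 + n2
    pe2  <- exp(rnorm(1, log(GMR.true), sqrt(mse * 2/n2)))
    mse2 <- if (n2 > 2) mse * rchisq(1, n2 - 2)/(n2 - 2) else 0
    # pool the stages: weighted means of the log-PEs and of SS/df
    pe  <- exp((n1 * log(pe1) + n2 * log(pe2))/(n1 + n2))
    CVp <- sqrt(exp(((n1 - 2) * mse1 + (n2 - 2) * mse2)/(n1 + n2 - 4)) - 1)
    ci  <- CI.BE(alpha = alpha, pe = pe, CV = CVp, n = n1 + n2)
    if (ci[1] >= 0.80 && ci[2] <= 1.25) pass2 <- pass2 + 1
  }
  c(pBE = (pass1 + pass2)/nsims, pBE.stage1 = pass1/nsims,
    pct.stage2 = 100 * mean(Ntot > n1), avgN = mean(Ntot))
}

sim.methodB(nsims = 1e5, GMR.true = 1.08)                 # Potvin literally
sim.methodB(nsims = 1e5, GMR.true = 1.08, GMR.ss = 1.08)  # 0.95 replaced by 1.08
```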
ElMaestro ★★★ · Denmark · 2011-02-13 11:57 · @ d_labes · Posting: # 6622

Hi d_labes,

❝ BTW: 100 000 simuls each. Total simul'ants time approx. 9.5 hours (with some clumsy R code).

I finally managed to get some working code up and running for this, too. 1,000,000 sims at CV = 0.3 and n1 = 36 for method B take [Praise C] 45 seconds [/Praise C].

I think there is some work to be done here. They seem to use a funny way to calculate power; I bet the magnificent PowerTOST package can do it better.

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-15 15:06 · @ ElMaestro · Posting: # 6624

Dear Old Pirate!

❝ 1,000,000 sims at CV = 0.3 and n1 = 36 for method B take [Praise C] 45 seconds [/Praise C].

Wow! That's really fast!

❝ ... They seem to use a funny way to calculate power; I bet the magnificent PowerTOST package can do it better.

Of course.

Do you think your C code can be made to interact with R, to facilitate such things as other methods for sample size adaption, other nominal alpha values for the two stages, introduction of a maximum number of subjects ...?

— Regards, Detlew
ElMaestro ★★★ · Denmark · 2011-02-15 18:39 · @ d_labes · Posting: # 6628

Ahoy d_labes,

❝ Of course

I think you are right about that. I looked at the difference in power estimates in exactly that region of 80%, and it comes down to decimals between the different methods. However, I do not have an understanding of the way Potvin et al. derive their alpha values - if you could enlighten me I would really appreciate it. I would imagine they back-calculate alpha from the total simulated power using their power formula on p. 252, but at the moment my numbers do not fit. Example, method B: we know that at alpha = 0.0294 at each stage, CV = 0.3, and n1 = 36, power is found to be 0.8379, and in the simulations avgN = 40.7. I cannot make those numbers fit the overall alpha of 0.0397 (within rounding boundaries).

❝ Do you think your C code can be made to interact with R ...?

That would be a nice objective. Can't imagine it would be difficult as such to make it interact; I have previously written code that could shell-execute R and run whatever R script the C code asks for, including dumping R output to a text file, but it takes a lot of time to write such code. It comes down to having the time to do it, I think.

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-16 10:18 · @ ElMaestro · Posting: # 6631

Ahoy ElMaestro,

❝ However, I do not have an understanding of the way Potvin et al. derive their alpha values - ...

That makes two of us. As far as I understood: the alphas are taken from the literature about group-sequential designs, especially the Pocock boundaries. See this nice presentation for an illustration. The boundaries and alphas were derived based on
- two equally sized stages (interim analysis at half the planned subjects),
- a normally distributed statistic with known variance,
- preservation of the overall alpha = 0.05.

In that sense the alphas are purely empirical, I think. You can take the real alphas from your simulations: count the accepted simulated studies at each stage for the simulations with GMR = 1.25 or 0.80.

So far the opinion of an "interested amateur" (cit. HS) in statistics.

— Regards, Detlew
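Where Pocock's 0.0294 comes from can at least be verified numerically: for two equally sized stages the cumulative z-statistics have correlation √(n1/(n1+n2)) = √0.5, and one solves for the common critical value that keeps the overall two-sided alpha at 0.05. A sketch using package mvtnorm:

```r
library(mvtnorm)

# Pocock's constant boundary for a two-stage design with equal stage sizes
R <- matrix(c(1, sqrt(0.5), sqrt(0.5), 1), nrow = 2)
f <- function(crit)               # overall two-sided alpha minus 0.05
  1 - as.numeric(pmvnorm(lower = rep(-crit, 2),
                         upper = rep(crit, 2), corr = R)) - 0.05
crit <- uniroot(f, interval = c(1.96, 3))$root
crit              # ~2.178
2 * pnorm(-crit)  # nominal (two-sided) alpha per stage: ~0.0294
```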
ElMaestro ★★★ · Denmark · 2011-02-16 10:32 · @ d_labes · Posting: # 6632

Dear d_labes,

Thanks for the explanations; my question was regarding the 'real alpha'. My brain is too small for this game.

❝ You can take the real alphas from your simulations: count the accepted simulated studies at each stage for the simulations with GMR = 1.25 or 0.80.

I am not sure what is meant here. Could you elaborate or give an example? Many thanks.

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-16 11:05 · @ ElMaestro · Posting: # 6633

Dear ElMaestro,

Simulate with ratio 1.25 and a given CV and count the studies accepted as BE: this will give you the overall alpha. Right?

Save whether BE was accepted (or not) in stage 1 or stage 2. The counts of BE acceptance per stage will give you the stage-wise alphas, I think.

— Regards, Detlew
ElMaestro ★★★ · Denmark · 2011-02-16 13:07 · @ d_labes · Posting: # 6636

Dear d_labes,

❝ Simulate with ratio 1.25 and a given CV and count the studies accepted as BE: this will give you the overall alpha. Right?
❝ Save whether BE was accepted (or not) in stage 1 or stage 2. The counts of BE acceptance per stage will give you the stage-wise alphas, I think.

Thanks for this suggestion. If I use n1 = 36, CV = 0.3 and T/R 0.8 or 1.25, then the % of acceptance goes towards 0.0294, apparently with 97.1% going into stage 2. Potvin et al. mention that only 81.1% go into stage 2, which to me intuitively seems low. Bothers me a wee bit; any idea how to proceed?

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-16 14:45 · @ ElMaestro · Posting: # 6637

Ahoy my dear Captn,

my clumsy R code gives for n1 = 36, CV = 30% (but only 100 000 sims): ratio = 1.25 (→ alpha) …

Seems nearly the result of Potvin et al.

— Regards, Detlew
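In terms of the hypothetical sim.methodB() sketched earlier in this thread, the counting reads, for example:

```r
# Empirical alpha: simulate at the acceptance limit and count.
sim.methodB(nsims = 1e5, GMR.true = 1.25, CV = 0.30, n1 = 36)
# pBE        = empirical overall alpha
# pBE.stage1 = part of it already spent at stage 1
# pct.stage2 = percentage of simulated studies proceeding to stage 2
```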
ElMaestro ★★★ · Denmark · 2011-02-08 00:44 · @ d_labes · Posting: # 6575

Ahoy d_labes,

I am simulating a lot these days, and am trying to reproduce Potvin's table 1. For Potvin's method B I cannot, however, see the reason behind p. 252 bottom right, where they show a calculation of s₂². Why not use something similar to the formula for s₁², but summed across all observations in the entire dataset after inclusion of the new subjects?

— Pass or fail! ElMaestro
d_labes ★★★ · Berlin, Germany · 2011-02-08 09:36 · @ ElMaestro · Posting: # 6578

Ahoy my Captn,

❝ I am simulating a lot these days ...

Simulants of the world unite!

❝ For Potvin's method B I cannot, however, see the reason behind p. 252 bottom right, where they show a calculation of s₂² ...

IMHO this is for speed reasons only. I think they don't recalculate s₂² using all data, but use the SS1 they already have plus the terms arising only from the new data of the subjects in stage 2. But I doubt this makes a great difference in the simulation speed. But "Kleinvieh macht auch Mist" - many a mickle makes a muckle (Scot.).

— Regards, Detlew
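The identity behind this is easy to check in a toy one-way analogue (not the full crossover ANOVA): with a stage effect in the model, the residual SS splits cleanly by stage, so SS1 never has to be recomputed.

```r
# Pooling stage-wise sums of squares equals a full refit
set.seed(1)
y1 <- rnorm(12)                          # 'stage 1' data
y2 <- rnorm(8)                           # new 'stage 2' subjects
SS1 <- sum((y1 - mean(y1))^2)            # already known after stage 1
SS2 <- sum((y2 - mean(y2))^2)            # terms from the new data only
s2.pooled <- (SS1 + SS2)/(length(y1) + length(y2) - 2)
fit <- lm(c(y1, y2) ~ factor(rep(1:2, c(12, 8))))   # refit with stage effect
all.equal(s2.pooled, summary(fit)$sigma^2)          # TRUE
```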
Helmut ★★★ · Vienna, Austria · 2011-04-18 14:58 · @ d_labes · Posting: # 6915

Dear all,

the preprint of a new paper is available:

Montague TH, Potvin D, DiLiberti CE, Hauck WW, Parr AF, Schuirmann DJ. Additional results for 'Sequential design approaches for bioequivalence studies with crossover designs'. Pharm Stat. 2011. DOI: 10.1002/pst.483

Methods B and C (α 0.0294) showed non-negligible inflation of the Type I error rate (i.e., >5.2%) for a PE of 0.90 (instead of 0.95 in the 2008 paper). Method D (α 0.028) preserves the overall α and is therefore recommended. It might be possible to adjust the α level for methods B/C as well, but this has not been evaluated yet. Also: "[…] we have not considered cases where the desired power used in the method is anything other than 80%. Increasing the power used in the methods from 80 to 90% would be expected to have similar direction of effect as decreasing the GMR used in the methods from 0.95 to 0.90. Ongoing work is seeking to generalize the methods considered in this paper and the earlier paper."

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
d_labes ★★★ · Berlin, Germany · 2011-04-19 10:58 · @ Helmut · Posting: # 6918

Dear Helmut, dear all,

❝ ... It might be possible to adjust the α level for methods B/C as well, but this has not been evaluated yet.

This sentence of the paper confuses me somewhat. As far as I understood, method D is a variant of method C with an adjusted alpha level.

Moreover, I don't share the authors' view of an "excessive inflation of the Type I error rate". In view of other Type I error inflation allowed by the FDA, for instance the inflation around CV = 30% for scaled ABE up to 6.5–7% due to the regulatory σw0 = 0.25 (see this presentation, page 23), I would consider the inflations observed in the paper (max. 0.0547) marginal.

What also makes me wonder is that the alpha "inflation" shifts to higher stage 1 sample sizes as the variability increases. This is somehow counter-intuitive to me. Small is beautiful?

— Regards, Detlew
Helmut ★★★ · Vienna, Austria · 2011-04-19 16:30 · @ d_labes · Posting: # 6921

Dear D. Labes,

❝ ❝ ... It might be possible to adjust the α level for methods B/C as well, but this has not been evaluated yet.
❝
❝ This sentence of the paper confuses me somewhat. As far as I understood, method D is a variant of method C with an adjusted alpha level.

This sentence is my interpretation. B/C are based on Pocock's 0.0294. You are right about method D. The original quote is:

"Although methods B and C, with the assumed GMR of 0.95 for the estimation of power and second-stage sample size as described in our first paper, in our opinion, provide acceptable control of the Type I error rate, changing the assumed GMR from 0.95 to 0.90 with these methods results in excessive inflation of the Type I error rate under some circumstances, and should be avoided. It is possible that by adjusting the α levels (α spending function) Methods B and C could be modified so as to control the Type I error rate with an assumed GMR of 0.90, but that is beyond the scope of this investigation."

❝ Moreover, I don't share the authors' view of an "excessive inflation of the Type I error rate".

Of course the criterion the authors have set for an "acceptable" inflation (i.e., ≤5.2%) is arbitrary.

❝ In view of other Type I error inflation allowed by the FDA, for instance the inflation around CV = 30% for scaled ABE up to 6.5–7% due to the regulatory σw0 = 0.25 (see this presentation, page 23), I would consider the inflations observed in the paper (max. 0.0547) marginal.

Me too. On the other hand, I'm not sure whether the EMA cares about the FDA. Its guideline only states: "If this approach is adopted appropriate steps must be taken to preserve the overall type I error of the experiment and the stopping criteria should be clearly defined prior to the study."

Bonus questions:
- There's no stopping criterion in Potvin's method. My protocols stated clearly that there is none in the method and were approved by the BfArM and two IECs. Experiences?
- …

❝ What also makes me wonder is that the alpha "inflation" shifts to higher stage 1 sample sizes as the variability increases. This is somehow counter-intuitive to me. Small is beautiful?

Yes, that's strange.

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
ElMaestro ★★★ · Denmark · 2011-04-19 16:49 · @ d_labes · Posting: # 6922

Dear d_labes,

❝ What also makes me wonder is that the alpha "inflation" shifts to higher stage 1 sample sizes as the variability increases. This is somehow counter-intuitive to me. Small is beautiful?

Try and plot the type I error rate as a function of CV for different levels of n1 under a fixed T/R (0.9 is a good choice) – e.g. along the lines of the sketch below. I bet ten Belgian drachmers that you will be surprised...

Dear HS,

❝ There's no stopping criterion in Potvin's method. My protocols stated clearly that there is none in the method and were approved by the BfArM and two IECs. Experiences?

Potvin's methods B/C/D do contain a stopping criterion: it is when, after stage 1, you realise it's game over.

— Pass or fail! ElMaestro
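For example, reusing the hypothetical sim.methodB() sketch from earlier in the thread (slow in plain R; reduce nsims for a quick look):

```r
# Empirical type I error vs CV for several n1, at the true ratio 1.25
CVs <- seq(0.10, 0.60, 0.05)
plot(NULL, xlim = range(CVs), ylim = c(0.02, 0.06),
     xlab = "CV", ylab = "empirical type I error")
for (n1 in c(12, 24, 36, 48)) {
  tie <- sapply(CVs, function(cv)
    sim.methodB(nsims = 1e4, GMR.true = 1.25, CV = cv, n1 = n1)["pBE"])
  lines(CVs, tie, type = "b")
}
abline(h = 0.05, lty = 2)   # nominal level
```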
Helmut ★★★ · Vienna, Austria · 2011-04-19 17:28 · @ ElMaestro · Posting: # 6924

Dear ElMaestro!

❝ ❝ There's no stopping criterion in Potvin's method. My protocols stated clearly that there is none in the method and were approved by the BfArM and two IECs. Experiences?
❝
❝ Potvin's methods B/C/D do contain a stopping criterion: it is when, after stage 1, you realise it's game over.

Agree. What IECs are used to (from phase III studies) is a maximum number of subjects, i.e., some kind of futility rule – which is not included in Potvin's method. Sponsors may run out of steam realising that after stage 1 with 24 subjects they would have to go on with another 96 in stage 2.

— Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.