Rocco_M ☆ Mexico, 2019-09-03 20:45 (1857 d 00:47 ago) Posting: # 20537 Views: 8,041 |
|
Hi all, I have a question about the initial CV input for a sample size calculation comparing two formulations when all that has been run is a single study, say, a first-in-man (FIM) study. It is not correct to use sd/mean of the first formulation as your CV, but suppose it is all you have: you measured log(AUC) in ten subjects for a single formulation (i.e., only for the reference). What would you enter as the CV in order to estimate the sample size of a subsequent (say, parallel) study comparing this formulation with another one that is not yet available? I see people typically enter the sample sd / sample mean of the single formulation, but I feel this is incorrect. |
Helmut ★★★ Vienna, Austria, 2019-09-05 16:05 (1855 d 05:27 ago) @ Rocco_M Posting: # 20543 Views: 7,298 |
|
Hi Rocco, ❝ I see people typically enter the sample sd / sample mean of the single formulation, but I feel this is incorrect. You are right. PK metrics like AUC and Cmax follow a lognormal distribution and hence arithmetic means and their SDs / CVs are wrong (i.e., positively biased). If you plan for a parallel design you should use the geometric CV. $$\overline{x}_{log}=\frac{\sum \log(x_i)}{n}$$ $$\overline{x}_{geo}=\sqrt[n]{x_1x_2\ldots x_n}=e^{\overline{x}_{log}}$$ $$s_{log}^{2}=\frac{\sum \left(\log(x_i)-\overline{x}_{log}\right)^2}{n-1}$$ $$CV_{log}=\sqrt{e^{s_{log}^{2}}-1}$$ Only if you don’t have access to the raw data would you need simulations. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
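A minimal R sketch of these formulas (the AUC values are made up; with real data you would plug in the ten observed values):

auc      <- c(1180, 1520, 975, 1310, 1645, 1090, 1420, 1260, 1710, 1005)
log.auc  <- log(auc)               # natural logarithms
mean.log <- mean(log.auc)          # arithmetic mean of the logs
gm       <- exp(mean.log)          # geometric mean
s2.log   <- var(log.auc)           # variance of the logs (denominator n - 1)
CV.geo   <- sqrt(exp(s2.log) - 1)  # geometric CV
round(100 * CV.geo, 2)             # geometric CV in percent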
Rocco_M ☆ Mexico, 2019-09-06 01:49 (1854 d 19:43 ago) @ Helmut Posting: # 20545 Views: 7,181 |
|
Thanks. But what I do not understand is: is it even reasonable to use the geometric CV for the reference? Isn’t the CV you want to input into a sample size calculation the CV corresponding to the difference of Test and Reference? Since you do not have the Test group here, I am a bit confused as to what using the geometric CV of only the reference tells you. In your slides, you have a line that says “if you have only mean and sd of the reference, a pilot study is unavoidable.” What am I missing? Thanks. Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut] |
Helmut ★★★ Vienna, Austria, 2019-09-06 02:15 (1854 d 19:17 ago) @ Rocco_M Posting: # 20546 Views: 7,290 |
|
Hi Rocco, ❝ […] is it even reasonable to use the geometric CV for the reference? Isn’t the CV you want to input into a sample size calculation the CV corresponding to the difference of Test and Reference? Since you do not have the Test group here, I am a bit confused as to what using the geometric CV of only the reference tells you. In your slides, you have a line that says “if you have only mean and sd of the reference, a pilot study is unavoidable.” ❝ ❝ What am I missing? Oh dear, my slides always give only half of the picture (the spoken narration is missing)… Let’s start from a 2×2×2 crossover. We have the within-subject variabilities of T and R (CVwT and CVwR). Since this is not a replicate design, they are not accessible and are pooled into the common CVw. One of the assumptions in ANOVA is identical variances. If they are not truly equal (say CVwT < CVwR), the CI is inflated: the “good” T is punished by the “bad” R. It is similar in a parallel design. You can assume that CVwT = CVwR and CVbT = CVbR and therefore use the (pooled, total) CVp of your FIM study. Here it is the other way ’round (you have the CVp of R). If the CV of T is higher, bad luck, power is compromised. If both are ~ equal, fine. If it is lower, you gain power. There is no free lunch. If you are cautious: a pilot study or a Two-Stage Design. For the latter I recommend the function power.tsd.p() of package Power2Stage for R. A reasonable stage 1 sample size is ~80% of what you estimate with sampleN.TOST(alpha=0.05, …) of package PowerTOST and the CVp of your FIM study.
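A sketch of this workflow in R, assuming a pooled CV of 0.30 from the FIM study and PowerTOST’s defaults (theta0 = 0.95, target power 80%); the argument names of power.tsd.p() are given from memory, so check the Power2Stage documentation:

library(PowerTOST)
library(Power2Stage)

CVp <- 0.30  # assumed pooled (total) CV from the FIM study

# fixed-sample size with alpha = 0.05, as suggested above
n.fix <- sampleN.TOST(alpha = 0.05, CV = CVp, theta0 = 0.95,
                      targetpower = 0.80, design = "parallel",
                      print = FALSE)[["Sample size"]]

# stage 1 of the two-stage design: ~80% of the fixed-sample size,
# rounded up to the next even number
n1 <- 2 * ceiling(0.80 * n.fix / 2)

# simulated operating characteristics of the two-stage parallel design
power.tsd.p(CV = CVp, n1 = n1, targetpower = 0.80)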
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Rocco_M ☆ Mexico, 2019-09-06 19:49 (1854 d 01:43 ago) @ Helmut Posting: # 20548 Views: 7,089 |
|
Thanks. So basically your analysis follows from the fact that the variance of the difference of T and R equals the sum of the variance of T and the variance of R, correct? And you are using the geometric CV as the estimate of CVp for R? Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut] |
Helmut ★★★ Vienna, Austria, 2019-09-06 20:15 (1854 d 01:17 ago) @ Rocco_M Posting: # 20549 Views: 7,197 |
|
Hi Rocco, ❝ So basically your analysis follows from the fact that the variance of the difference of T and R equals the sum of the variance of T and the variance of R, correct? Well, you have four variance components (s²wR, s²wT, s²bT, s²bR). Then
❝ And you are using the geometric CV as the estimate of CVp for R? Yes. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
ElMaestro ★★★ Denmark, 2019-09-06 21:03 (1854 d 00:29 ago) @ Helmut Posting: # 20550 Views: 7,114 |
|
Hi Hötzi, ❝ 3. 2 group parallel ❝ Only the pooled (total) s²p. With a tricky mixed-effects model you could ❝ get s²pT and s²pR. 2 group parallel: This tricky model may not be so tricky after all, but may be overkill. I believe you will get the same result as you would obtain from doing a plain sample standard deviation on T and R subsets, respectively. — Pass or fail! ElMaestro |
Helmut ★★★ Vienna, Austria, 2019-09-07 11:57 (1853 d 09:34 ago) @ ElMaestro Posting: # 20551 Views: 7,060 |
|
Hi ElMaestro, ❝ ❝ 3. 2 group parallel ❝ ❝ Only the pooled (total) s²p. With a tricky mixed-effects model you ❝ ❝ could get s²pT and s²pR. ❝ ❝ This tricky model […] may be overkill. I believe you will get the same result as you would obtain from doing a plain sample standard deviation on T and R subsets, respectively. KISS. You are absolutely right. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
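A small sketch of that “plain” approach in R, with simulated log(AUC) values standing in for real data:

set.seed(20550)
dta <- data.frame(                               # one row per subject
  trt    = rep(c("T", "R"), each = 12),
  logAUC = c(rnorm(12, mean = 7.00, sd = 0.30),  # test arm
             rnorm(12, mean = 7.05, sd = 0.25))  # reference arm
)

# per-arm total variances on the log scale (between + within, not separable)
s2 <- tapply(dta$logAUC, dta$trt, var)

# corresponding per-arm geometric CVs in percent
round(100 * sqrt(exp(s2) - 1), 1)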
Rocco_M ☆ Mexico, 2019-09-09 15:31 (1851 d 06:01 ago) @ Helmut Posting: # 20557 Views: 7,032 |
|
My apologies. I am not exactly sure I understand 100% what the other poster is saying here. If you plan a parallel study based on a FIM study: if you have one pooled variance from the FIM, is it enough to enter into a sample size calculation? Or does one also need an assumption for the pooled variance of the other arm, and then take the calculated variance of the difference based on the FIM variance and the assumed pooled variance of the other arm? This is what I do not understand. -RoccoM. Mexico Edit: Please don’t shout! [Helmut] |
Helmut ★★★ Vienna, Austria, 2019-09-09 16:09 (1851 d 05:22 ago) @ Rocco_M Posting: # 20558 Views: 6,984 |
|
Hi Rocco, see what I wrote above. Since you have no idea about the new formulation, you have to assume indeed that the variances of T and R are at least similar. If you don’t like this assumption, there are just two ways out:
• Run a pilot study of the parallel design first and base the sample size of the pivotal study on its CV.
• Opt for a Two-Stage Design (e.g., with power.tsd.p() of package Power2Stage), where the sample size is re-estimated from the CV observed in the first stage.
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Rocco_M ☆ Mexico, 2019-09-09 18:35 (1851 d 02:57 ago) @ Helmut Posting: # 20560 Views: 6,988 |
|
Gracias and sorry for the confusion. Here is what I think I am not getting. When you run a FIM study, you do not have two populations. You have a treatment and (perhaps) a control. So you have a geometric CV, say CV1, calculated from that one treatment sample in the FIM study [it is not a pooled variance, is it? There is only one sample]. If you then use it to design a parallel follow-up, do you need to assume that the CV of the other arm, call it CV2, is equal to CV1, and then enter CVp = pooled(CV1, CV2) into the sample size formula, e.g., sampleN.TOST()? Many apologies if I am missing the point. I do not seem to understand which components of the pooled variance go into the sample size computation. |
Helmut ★★★ Vienna, Austria, 2019-09-09 19:19 (1851 d 02:12 ago) @ Rocco_M Posting: # 20562 Views: 6,944 |
|
¡Hola Rocco! ❝ When you run a FIM study, you do not have two populations. You have a treatment and (perhaps) a control. So you have a geometric CV, say CV1, calculated from that one treatment sample in the FIM study. Correct, so far. ❝ It is not a pooled variance, is it? There is only one sample… Here you err. Although we have just one sample, we have two variances, between-subjects (that’s obvious) and within-subjects (not so obvious). The fact that you administered the drug on one occasion does not mean the within-subject variance disappears. Administer the same drug the next day and you will get different concentrations. Hence, the within-subject variance is always there; we just cannot estimate it. We get only the total (or pooled) variance. Granted, generally CVb > CVw, but there are cases where it is the other way ’round. We simply don’t know. ❝ … If you then use it to design a parallel follow-up, do you need to assume that the CV of the other arm, call it CV2, is equal to CV1, … Yes. ❝ … and then enter CVp = pooled(CV1, CV2) into the sample size formula, e.g., sampleN.TOST()? Wait a minute. CV2 is unknown until we have performed the parallel study. Therefore, simply plug in the one you found in the FIM study. ❝ […] I do not seem to understand which components of the pooled variance go into the sample size computation. See my previous post, esp. case #4. We have one variance, which is pooled from s²w and s²b. Maybe the terminology is confusing. Pooling does not mean that we have the individual components. We know only the result and there is an infinite number of combinations which give the same result. However, that’s not important. In planning the parallel design you need only CV1 and have to assume that CV2 = CV1. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
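A small numeric illustration of the last point; the splits into between- and within-subject parts below are arbitrary, which is exactly why they do not matter. CV2mse() and mse2CV() are the conversion helpers of PowerTOST:

library(PowerTOST)

CVp      <- 0.30         # total CV observed in the FIM study
s2.total <- CV2mse(CVp)  # total variance on the log scale

# two of infinitely many splits into between- and within-subject parts
split.a <- c(b = 0.75, w = 0.25) * s2.total
split.b <- c(b = 0.40, w = 0.60) * s2.total

# both give back the same total CV; only this total enters the planning
mse2CV(sum(split.a))
mse2CV(sum(split.b))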
Rocco_M ☆ Mexico, 2019-09-13 23:28 (1846 d 22:04 ago) @ Helmut Posting: # 20595 Views: 6,812 |
|
Thank you so much, I think it makes sense. Just one last question. Where does the formula on slide 10.83 in bebac.at/lectures/Leuven2013WS2.pdf for the CI of a parallel design come from? I cannot seem to find a reference anywhere. Gracias! |
Helmut ★★★ Vienna, Austria, 2019-09-14 02:16 (1846 d 19:16 ago) @ Rocco_M Posting: # 20596 Views: 6,912 |
|
Hi Rocco, ❝ Where does the formula on slide 10.83 in bebac.at/lectures/Leuven2013WS2.pdf for the CI of a parallel design come from? I cannot seem to find a reference anywhere. Honestly, I don’t remember why I simplified the commonly used formula. Algebra: $$s\sqrt{\tfrac{n_1+n_2}{n_1n_2}}=\sqrt{s^2(1/n_1+1/n_2)}\;\tiny{\square}$$ Comparison with the data of the example.
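A quick numeric check of the identity (arbitrary numbers):

s  <- 0.294          # SD of the log-transformed data
n1 <- 14; n2 <- 12   # unequal group sizes on purpose

lhs <- s * sqrt((n1 + n2) / (n1 * n2))
rhs <- sqrt(s^2 * (1 / n1 + 1 / n2))
all.equal(lhs, rhs)  # TRUE: the two forms of the standard error are identical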
In R’s function t.test(), var.equal = FALSE (the Welch approximation) is the default because it does not rely on the assumption of equal group variances.
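For completeness, a sketch of how the 90% CI for T/R in a parallel design can be obtained with t.test() on the log scale (simulated data; the Welch approximation is the default anyway):

set.seed(20596)
logT <- rnorm(12, mean = 7.00, sd = 0.30)  # simulated log(AUC), test arm
logR <- rnorm(12, mean = 7.05, sd = 0.25)  # simulated log(AUC), reference arm

res <- t.test(logT, logR, var.equal = FALSE, conf.level = 0.90)
round(100 * exp(res$conf.int), 2)          # 90% CI of T/R in percent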
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Rocco_M ☆ Mexico, 2019-09-18 00:02 (1842 d 21:30 ago) @ Helmut Posting: # 20608 Views: 6,653 |
|
Okay, gracias. That was a brain fart on my end. I now see that it is equivalent to 1/n1 + 1/n2. One last question. If I run sampleN.TOST(CV=0.3, theta0=1.0, theta1=0.8, theta2=1.25, logscale=TRUE, alpha=0.05, targetpower=0.9, design="parallel") I get a minimum sample size of 78 in total. But then if I run CI.BE(pe=1.0, CV=0.3, design="parallel", n=24) I get a CI of approximately [0.81, 1.23]. This confuses me. Shouldn’t I need to enter n of at least 78 in order to get a CI within [0.80, 1.25]? Maybe I am confusing concepts. In other words, what are the implications if a study *meets* bioequivalence but is underpowered? |
Helmut ★★★ Vienna, Austria, 2019-09-18 13:04 (1842 d 08:27 ago) @ Rocco_M Posting: # 20609 Views: 6,627 |
|
Hi Rocco, ❝ sampleN.TOST(CV=0.3, theta0=1.0, theta1=0.8, theta2=1.25, logscale=TRUE, alpha=0.05, targetpower=0.9, design="parallel") ❝ I get a minimum sample size of 78 in total. Correct. ❝ But then if I run CI.BE(pe=1.0, CV=0.3, design="parallel", n=24) ❝ I get a CI of approximately [0.81, 1.23]. Correct again. ❝ This confuses me. Shouldn’t I need to enter n of at least 78 in order to get a CI within [0.80, 1.25]? Nope. CV and theta0 in sampleN.TOST() are assumptions (before the study), whereas CV and pe in CI.BE() are realizations (observations in the study). I’m not a friend of post hoc (a posteriori, retrospective) power but let’s check that:
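For example, with power.TOST() of PowerTOST and the numbers from the CI.BE() call above (a sketch; the quoted power is approximate):

library(PowerTOST)
# post hoc power with the point estimate and CV observed in the small study
power.TOST(CV = 0.30, theta0 = 1.0, n = 24, design = "parallel")
# roughly 0.11: far below the 0.90 aimed at with n = 78, although the
# 90% CI of [0.81, 1.23] lies entirely within the 80.00–125.00% limits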
❝ In other words, what are the implications if a study *meets* bioequivalence but is underpowered? None. Sample size estimation is always based on assumptions. If they turn out to be wrong (higher CV, PE worse than theta0, more dropouts than anticipated), you might still meet BE by luck. As ElMaestro once wrote: Being lucky is not a crime. But any confirmatory study (like BE) requires an appropriate sample size estimation. There are two problems which
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |