daryazyatina Junior Ukraine, 2017-08-03 09:34 Posting: # 17646 Views: 1,891 

Hi, guys! Today I have a new question related to the sample size formula for average bioequivalence. Until now I calculated the sample size for ABE by the formula described in the book by Shein-Chung Chow, Jun Shao and Hansheng Wang, "Sample Size Calculations in Clinical Research", © 2003 Marcel Dekker (Chapter 10: Bioequivalence Testing), http://www.crcnetbase.com/doi/book/10.1201/9780203911341. I decided to try to calculate the sample size in R with the package "PowerTOST" and the function sampleN.TOST(). Comparing the result from R with the one from the Chow, Shao and Wang formula, I realized that they differ. I looked at the articles and books this function refers to and understood that different formulas are used. Hence my question: which formula should I use to calculate the sample size for ABE using two one-sided tests? 
BEproff Senior Russia, 2017-08-03 09:55 @ daryazyatina Posting: # 17647 Views: 1,716 

Hi darya, The results must differ because PowerTOST uses an iterative method for the calculation. AFAIK, PowerTOST is used worldwide and no claims from regulators have been received. 
daryazyatina Junior Ukraine, 2017-08-03 10:08 @ BEproff Posting: # 17648 Views: 1,707 

Hi, BEproff. Thank you for your answer.
» Results must be different because PowerTOST uses iterative method for calculation.
Have you looked at the formula I used? It also uses an iterative method.
» AFAIK, PowerTOST is used worldwide and no claims from regulators have been received.
That's good, but I asked a different question. I used the formula from the book:
» » Shein-Chung Chow, Jun Shao and Hansheng Wang, "Sample Size Calculations in Clinical Research", © 2003 Marcel Dekker (Chapter 10: Bioequivalence Testing)
and likewise no claims from regulators have been received. 
ElMaestro Hero Denmark, 2017-08-03 10:17 @ daryazyatina Posting: # 17649 Views: 1,717 

Hi Daryazyatina,
» I decided to try to calculate the sample size in R with package "PowerTOST" and function sampleN.TOST()
Good choice.
» Comparing the results of the calculation in R and calculation with formula Shein-Chung Chow, Jun Shao and Hansheng Wang, I realized that they are different. I looked at the articles and books that this function refers to, and understand that different formulas are used.
Can you tell what your design is and which values you want to plug in for the calculation? If the difference is 2 or 4 subjects, then so be it; subtle differences in approximation may account for that. If the difference is 46 or something then I'd wonder, too. I am sure there is an explanation and that your confidence in the PowerTOST package can easily be restored. Apart from that, you are of course right if you intended to hint that the author of the power.TOST family of R functions is a dubious character; I could be wrong, but… Best regards, ElMaestro (since June 2017 having an affair with the bootstrap). 
daryazyatina Junior Ukraine, 2017-08-03 10:58 @ ElMaestro Posting: # 17650 Views: 1,690 

Hi ElMaestro,
» Good choice
Thank you.
» Can you tell what your design is and which values you want to plug in for the calculation? If the difference is 2 or 4 subjects, then so be it, subtle differences in approximation may account for that. If the difference is 46 or something then I'd wonder, too. I am sure there is an explanation and that your confidence in the PowerTOST package can easily be restored.
For comparison I use all the standard settings: design 2×2, acceptance limits 0.80–1.25, power 0.8, alpha 0.05. The only thing I'm not sure about is the CV, because in the formula I used it is the intra-subject variability, whereas in PowerTOST it is the coefficient of variation as a ratio. In both calculations I used CV = 0.3. And this is the result from sampleN.TOST(): sampleN.TOST(logscale = TRUE, CV = 0.3, details = TRUE) gives a total sample size of n = 40. The sample size by the formula I used: n = 28.
» Apart from that you are of course right if you intended to hint that the author of the power.TOST family of R functions is a dubious character. 
DavidManteigas Regular Portugal, 2017-08-03 12:40 @ daryazyatina Posting: # 17651 Views: 1,667 

Hi daryazyatina,
For some reason I can't see the image with the formula that you put in the first post.
» For comparison, I use all the standard properties. Design 2x2, confidence intervals 0.8–1.25, power 0.8, alpha 0.05.
» The only thing about what I'm not sure is CV. Because in formula that I used this is intrasubject variability, but in PowerTOST() this is coefficient of variation as ratio. In calculations in both cases I used CV = 0.3.
Within-subject standard deviation and within-subject CV are different parameters. Nevertheless, I think that this is not the only reason for such a big difference. There is a sentence in one of the articles quoted in the sampleN.TOST documentation that may clarify this issue: "This formula is less conservative than Formula (5), but it may result in a lower actual power than the required. For example, when α = 0.05, σ = 0.3, Δ = 0.2, θ = 0.01 and a required power = 0.80, the sample size from Formula (6) [formula from Chow] is 17 per sequence, but the actual power obtained by this sample size is only 0.69." So, reading this, I am not sure whether Chow's formula is appropriate for calculating the sample size for BABE trials. I have only quickly read the article, so I may not be doing a proper analysis. Perhaps dlabes might clarify this, since he is the master we should all thank for the amazing PowerTOST package. 
daryazyatina Junior Ukraine, 2017-08-03 13:46 @ DavidManteigas Posting: # 17652 Views: 1,612 

Hi, DavidManteigas,
» For some reason, I can't see the image with the formula that you put on the first post.
Look again at the picture with the formula. At first it was not visible; now you can see it.
» Within subject standard deviation and within subject CV are different parameters. Nevertheless, I think that this is not the only reason for such a big difference.
I did not write anything about the standard deviation.
» "This formula is less conservative than Formula (5), but it may result in a lower actual power than the required. For example, when α = 0.05, σ = 0.3, Δ = 0.2, θ = 0.01 and a required power = 0.80, the sample size from Formula (6) [formula from Chow] is 17 per sequence, but the actual power obtained by this sample size is only 0.69."
If you look at the formula I used, you will see that it is a different Chow formula than the one in the article. And that's why I do not understand which formula should be used.
» So by reading this, I am not sure if Chow formula might be appropriate to calculate sample size for BABE trials. I have just quickly read the article, so I may not be doing a proper analysis. Perhaps dlabes might clarify this, since he is the master that we all should thank for the amazing PowerTOST package
I just want to understand how to do it right. 
Helmut Hero Vienna, Austria, 2017-08-03 15:37 @ DavidManteigas Posting: # 17653 Views: 1,560 

Hi David & Darya,
» For some reason, I can't see the image with the formula that you put on the first post.
I uploaded a copy. Should be visible by now.
» » The only thing about what I'm not sure is CV. Because in formula that I used this is intrasubject variability, but in PowerTOST() this is coefficient of variation as ratio. In calculations in both cases I used CV = 0.3.
» Within subject standard deviation and within subject CV are different parameters. Nevertheless, I think that this is not the only reason for such a big difference.
Chow used the standard deviation, whilst in PowerTOST the CV (as ratio, not in %!) is used. However, no big difference, since CV = √(ℯ^{s²} – 1) and the other way ’round s = √(log(CV² + 1)). In PowerTOST, for convenience, you can use se2CV(foo) for the former and CV2se(foo) for the latter.
» There is a sentence in one of the articles that is quoted on the sampleN.TOST formula that may clarify this issue: […]
» So by reading this, I am not sure if Chow formula might be appropriate to calculate sample size for BABE trials.
I think it’s crap. The formula Darya posted, (10.2.6) on p.259 of the book, gives the impression that n is the total sample size. The text continues with: “Since the above equations do not have an explicit solution, for convenience, for a 2 × 2 crossover design, the total sample size needed to achieve a power of 80% or 90% at 5% level of significance with various combinations of ε and δ is given in Table 10.2.1.” However, in the example which follows on p.260 we read: “By referring to Table 10.2.1, a total of 24 subjects per sequence is needed in order to achieve an 80% power at the 5% level of significance.” (my emphases)
» […] Perhaps dlabes might clarify this, since he is the master that we all should thank for the amazing PowerTOST package
Detlew is on vacation. Some clarifications: using the formulas of Zhang^{1} or Chow & Liu, you get the sample size per sequence. To obtain the total, multiply by 2, which is already done in the right-hand side of Hauschke’s formulas. Comparison with the references:
Table 2, untransformed data, 90% power, Δ 0.2, σ 0.3, α 0.05 (p.537^{1}): 37 per sequence
Table 5.4.1, untransformed data, 80% power, Δ 0.2μ_{R}, α 0.05 (p.158^{3}): 52
Table 5.1, log-transformed data, 80% power, (θ_{1}, 1∕θ_{1}) = (0.80, 1.25), θ_{0} 0.95, α 0.05 (p.113^{2}): 40
Stop using the formula given by Chow, Shao, Wang! Sample sizes are way too low – which compromises power. If you used it in the past for 80% power – if all assumptions (CV, θ_{0}) turned out to be “correct” – substantially more than 20% of studies should have failed. If not, you were lucky!
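For readers following along without R, the two conversions quoted above can be sketched with nothing but the Python standard library. The functions below are hypothetical stand-ins for PowerTOST's se2CV() and CV2se(), not the package's own code:

```python
import math

def se2cv(s):
    """Convert the within-subject SD of the log-transformed data
    into a coefficient of variation (as ratio): CV = sqrt(exp(s^2) - 1)."""
    return math.sqrt(math.exp(s * s) - 1)

def cv2se(cv):
    """Inverse conversion: s = sqrt(log(CV^2 + 1))."""
    return math.sqrt(math.log(cv * cv + 1))

# s = 0.3 corresponds to a CV of about 0.3069 (~30.7 %), and the two
# functions are exact inverses of each other.
print(round(se2cv(0.3), 4))         # 0.3069
print(round(cv2se(se2cv(0.3)), 4))  # 0.3
```

The difference between s = 0.3 and CV = 0.3 is small, which is why it cannot explain a gap as large as 28 vs. 40 subjects.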
— All the best, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
daryazyatina Junior Ukraine, 2017-08-03 15:52 @ Helmut Posting: # 17654 Views: 1,535 

Hi, Helmut,
» Stop using the formula given by Chow, Shao, Wang! Sample sizes are way too low – which compromises power. If you used it in the past for 80% power – if all assumptions (CV, θ_{0}) turned out to be “correct” – substantially more than 20% of studies should have failed. If not, you were lucky!
Thank you for such a comprehensive answer. It's good that we know about this now. 
DavidManteigas Regular Portugal, 2017-08-03 16:12 @ Helmut Posting: # 17657 Views: 1,525 

Thank you for the clarification, Helmut; very helpful as always. In the example you've mentioned on page 260 there is also another mistake, since z_{0.10} is 1.28 and not 0.84, so according to the reduced formula 28 subjects per sequence would be necessary and not the reported 21. Still significantly lower than the sample size obtained with sampleN.TOST. 
Helmut Hero Vienna, Austria, 2017-08-03 16:33 @ DavidManteigas Posting: # 17658 Views: 1,531 

Hi David,
» […] on page 260 there is also another mistake, since z_{0.10} is 1.28 and not 0.84, so according to the reduced formula, 28 subjects per sequence would be necessary and not the reported 21.
Yep, I noticed that as well. BTW, z_{0.05} is 1.64 and not 1.96. In all its “beauty”:
Of course, PowerTOST contains the large sample approximation based on z as well. It's an internal (and hence undocumented) function.
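That large-sample z approximation is easy to mimic with the Python standard library. The sketch below is not PowerTOST's internal function; it assumes the textbook formula n ≥ 2·s²·(z_{1–α} + z_{1–β})²∕δ² for the total sample size of a balanced 2×2 design on the log scale, with δ the distance from ln θ_{0} to the nearer BE limit (θ_{0} ≠ 1), rounded up to the next even number:

```python
import math
from statistics import NormalDist

def n_z_approx(cv, theta0=0.95, alpha=0.05, power=0.80, lo=0.80, hi=1.25):
    """Total sample size for a 2x2 crossover TOST (log scale) via the
    large-sample normal approximation; assumes lo < theta0 < hi, theta0 != 1."""
    s2 = math.log(cv * cv + 1)             # within-subject variance, log scale
    z_a = NormalDist().inv_cdf(1 - alpha)  # 1.6449 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)      # 0.8416 for power = 0.80
    # distance from log(theta0) to the nearer BE limit
    delta = min(math.log(hi) - math.log(theta0),
                math.log(theta0) - math.log(lo))
    n = 2 * s2 * (z_a + z_b) ** 2 / delta ** 2
    n = math.ceil(n)
    return n + n % 2                       # balanced design: round up to even

print(n_z_approx(0.3))  # 38
```

For CV = 0.30 and θ_{0} = 0.95 this yields 38, two fewer than sampleN.TOST's exact 40, illustrating how the z approximation errs on the low side.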
The approximation by the non-central t-distribution does an excellent job. The shifted central t is also good; only in a few cases does it give higher sample sizes. Conservative, no worries. That’s why we’ve set it as the default in package Power2Stage for speed reasons (~40 times faster than Owen’s Q). The large sample approximation sucks: always lower sample sizes than with the other methods, power compromised – unless one dares to submit a study evaluated by z. Quoting from a conversation with an eminent regulator of the Iberian Peninsula: “Frankly, between z and t methods the difference is ridiculous when variability is not large and later a few subjects are added to compensate for dropouts. I do not see any problem in using the z-method. I use it because it is very straightforward in Excel and there is no need to have special software.” Well, cough… Braindead. 38.9 ℃ and rising, no AC in my office… — All the best, Helmut Schütz 
d_labes Hero Berlin, Germany, 2017-08-16 16:01 @ Helmut Posting: # 17694 Views: 1,036 

Hi all! In addition to all that was said up to now: Don’t use the book by Chow, Shao, Wang! It is full of errors and full of terminology incompatible with other publications, so that one has no chance to compare. (E.g., what are epsilon, delta and sigma1.1 compared to our use of theta0, the upper or lower BE limit and the intra-subject CV? No one knows!) To cite our ol' Sailor: "Garbage in, garbage out. It's that simple." — Regards, Detlew 
Helmut Hero Vienna, Austria, 2017-08-03 16:04 @ daryazyatina Posting: # 17655 Views: 1,530 

Hi Darya,
» The sample size by the formula I used:
» n=28
To keep everything equal you should convert the standard deviation to the CV. Hence, se2CV(0.3), which gives a CV of ≈0.3069.
Of course higher than the 28 by the dubious formula. As David mentioned above (quoting Zhang’s paper), power is compromised. Let’s check how much:
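Without R at hand, the size of the shortfall can be gauged with a stdlib-Python normal approximation. PowerTOST's power.TOST itself uses exact methods (Owen's Q); the hypothetical helper below is only indicative:

```python
import math
from statistics import NormalDist

def power_tost_z(cv, n, theta0=0.95, alpha=0.05, lo=0.80, hi=1.25):
    """Approximate TOST power for a 2x2 crossover with n subjects in
    total (log scale), via the large-sample normal approximation."""
    nd = NormalDist()
    s = math.sqrt(math.log(cv * cv + 1))  # within-subject SD, log scale
    se = s * math.sqrt(2 / n)             # SE of the treatment difference
    z_a = nd.inv_cdf(1 - alpha)
    p = (nd.cdf((math.log(hi) - math.log(theta0)) / se - z_a)
         + nd.cdf((math.log(theta0) - math.log(lo)) / se - z_a)
         - 1)
    return max(p, 0.0)

# n = 28 (the Chow, Shao, Wang result) at CV = 0.30, theta0 = 0.95:
# about 0.68 instead of the targeted 0.80.
print(round(power_tost_z(0.3, 28), 2))  # 0.68
```

The exact t-based power differs somewhat from this z value, but the message is the same: with n = 28 the study is clearly underpowered.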
— All the best, Helmut Schütz 
daryazyatina Junior Ukraine, 2017-08-04 08:15 @ Helmut Posting: # 17663 Views: 1,438 

Hi Helmut,
» To keep everything equal you should convert the standard deviation to the CV. Hence,
» se2CV(0.3)
You wrote about converting the standard deviation to CV, but used the formula to convert the standard error to CV. I thought that the standard deviation and the standard error are different parameters. Maybe I'm wrong. 
Helmut Hero Vienna, Austria, 2017-08-04 12:41 @ daryazyatina Posting: # 17664 Views: 1,403 

Hi Darya,
» You wrote about converting the standard deviation to CV, but used the formula to convert the standard error to CV. I thought that the standard deviation and the standard error are different parameters.
Like in this thread it is a question of terminology; see this thread for details. In PowerTOST: ?se2CV to get the formulas for conversion. If you want a second opinion:^{1} Nitpick: chaos everywhere. We have only s_{w}, which is the estimate of the unknown σ_{w}… BTW, when it comes to reference-scaling, both the EMA and the FDA correctly use s_{wR}. Since scaling is based on estimates and not on the unknown parameters, we might observe an inflation of the type I error.^{2}
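Darya's underlying question (SD vs. SE) can be made concrete with a plain-Python toy simulation (illustrative numbers only): the standard deviation describes the spread of the observations and stays put as n grows, whereas the standard error SD∕√n describes the precision of the estimated mean and shrinks:

```python
import math
import random
from statistics import stdev

random.seed(42)  # reproducible toy data

for n in (10, 100, 1000):
    x = [random.gauss(100, 15) for _ in range(n)]  # "data": mean 100, SD 15
    sd = stdev(x)            # describes the data; stays near 15
    se = sd / math.sqrt(n)   # describes the mean's precision; shrinks with n
    print(f"n = {n:4d}   SD = {sd:5.2f}   SE = {se:5.2f}")
```

As n grows the SD hovers around the true σ = 15 while the SE collapses towards zero, which is exactly the "description vs. inference" distinction of Altman & Bland.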
— All the best, Helmut Schütz 
daryazyatina Junior Ukraine, 2017-08-16 10:12 @ Helmut Posting: # 17689 Views: 1,063 

Hi Helmut,
» Like in this thread it is a question of terminology; see this thread for details.
I have some questions about this. Some articles describe the difference between a standard deviation and a standard error. How should I understand such articles? For example: "The terms 'standard error' and 'standard deviation' are often confused. The contrast between these two terms reflects the important distinction between data description and inference, one that all researchers should appreciate."^{1}
1. Douglas G. Altman, J. Martin Bland. Standard deviations and standard errors. BMJ. 2005;331:903. doi:10.1136/bmj.331.7521.903
2. David L. Streiner. Maintaining Standards: Differences between the Standard Deviation and Standard Error, and When to Use Each. http://ww1.cpaapc.org/Publications/Archives/PDF/1996/Oct/strein2.pdf 
Helmut Hero Vienna, Austria, 2017-08-16 18:54 @ daryazyatina Posting: # 17695 Views: 1,028 

Hi Darya,
» Some articles describe the difference between a standard deviation and a standard error.
THX for the second one! It was great fun to read.
» How to understand such articles?
Dunno. They are perfect descriptions. Read them again. — All the best, Helmut Schütz 