ElMaestro ★★★ Denmark, 2009-05-05 21:49 Posting: # 3649
Dear all,

I am still wondering about the Danish policy for BE, so I got curious and wanted to see some power curves for 2,2,2-BE studies when 1.0 must be part of the 90% CI. The data I have are based on a brute-force method, i.e. the calculated power asymptotically approaches the true power as the number of iterations/resamples approaches infinity.

So, here is an example, using R lingo, where N is the number of sbj in each sequence, Pwr1 is power according to the Danish requirements, and Pwr0 is power with the usual requirements. CV was 28%, R/T was 95%, 120000 resamples were used:

N=c(16, 20, 24, 28, 32, 36, 40, 44, 48, 50, 60, 70, 80, 90, 100, 110)
Pwr1=c(0.712, 0.751, 0.754, 0.738, 0.718, 0.696, 0.677, 0.654, 0.637, 0.629, 0.581, 0.535, 0.493, 0.451, 0.415, 0.38)
Pwr0=c(0.772, 0.856, 0.911, 0.946, 0.967, 0.98, 0.989, 0.993, 0.996, 0.997, 0.999, 1, 1, 1, 1, 1)

Example of how to read this: if we have 16 sbj in each sequence, then the power with the standard requirements is approximately 77.2% (Fartssie gives 77.6% on my machine); with the Danish requirements it is approximately 71.2%.

These data are rather shocking to me, provided they are true. The power has a maximum in the early twenties (sbj per sequence) at about 75%. So if a company believes their product has a CV of 28% and T/R=95%, then there is no way of powering the study to 80%.

So my questions: Are these data correct? Could someone with more specialised software check those values? I think the assumed CV and R/T values are reasonable, agreed? Did anyone ever understand the Danish thinking? Talk to a regulator there perhaps?

Best regards EM.

*: What's the real power at CV=65%, R/T=95%, 38 sbj (19 in each seq)??
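A minimal R sketch of such a brute-force approach (assuming the standard 2,2,2-BE model on the log scale with equal sequence sizes; the function name and structure are illustrative, not the original code):

sim.power <- function(n, CV = 0.28, ratio = 0.95, nsims = 120000) {
  # n = subjects per sequence; simulate the 90% CI on the log scale
  sigma <- sqrt(log(1 + CV^2))               # within-subject SD (log scale)
  df    <- 2 * n - 2
  se    <- sigma * sqrt(1 / n)               # SE of the log point estimate
  d     <- rnorm(nsims, log(ratio), se)      # simulated point estimates
  s2    <- sigma^2 * rchisq(nsims, df) / df  # simulated variance estimates
  hw    <- qt(0.95, df) * sqrt(s2 / n)       # CI half-width
  lo <- d - hw; hi <- d + hw
  be <- lo > log(0.80) & hi < log(1.25)      # usual 0.80-1.25 criterion
  dk <- be & lo <= 0 & hi >= 0               # Danish: 1.0 must be in the CI
  c(Pwr0 = mean(be), Pwr1 = mean(dk))
}
set.seed(3649)
sim.power(16)   # compare with the N = 16 entries above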
martin ★★ Austria, 2009-05-05 23:32 @ ElMaestro Posting: # 3650
Dear EM!

A larger sample size will result in a narrower confidence interval for the unknown population parameter (rule of thumb: quadrupling the sample size will double the precision). You chose R/T=95% and not R/T=100% as the expected population parameter for your simulations, and the narrower confidence intervals around the expected true ratio of R/T=95% at larger sample sizes give you the results observed in your simulation study. Just use R/T=100% as the population parameter (i.e. assuming perfect BE).

Hope this helps

martin
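The rule of thumb in numbers, as a throwaway R line (an illustration, assuming a 2x2 crossover with n subjects per sequence, on the log scale):

n     <- c(16, 64)                           # quadruple the sample size
sigma <- sqrt(log(1 + 0.28^2))               # within-subject SD for CV = 28%
hw    <- qt(0.95, 2 * n - 2) * sigma / sqrt(n)  # 90% CI half-width (log scale)
hw[1] / hw[2]                                # close to 2: precision roughly doubled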
ElMaestro ★★★ Denmark, 2009-05-05 23:48 @ martin Posting: # 3651
Dear Martin,

I think you answered a question I did not ask. No worries, this has happened before. If we ask ourselves at which T/R the odd Danish requirement has the least impact (as compared to the standard criteria), then I would expect this to be at T/R = 1.0 (the chance of the CI traversing 1.0 is highest there for stochastic reasons), which I guess is what you also expressed.

Best regards, EM.
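That intuition can be put into numbers with a normal approximation (a sketch, treating the within-subject SD as known; function and argument names are illustrative):

p.ci.contains.1 <- function(ratio, n, CV = 0.28) {
  # probability that the 90% CI of a 2x2 crossover contains 1.0,
  # with n subjects per sequence and a normal log point estimate
  sigma <- sqrt(log(1 + CV^2))
  se    <- sigma / sqrt(n)
  tcrit <- qt(0.95, 2 * n - 2)
  pnorm(tcrit - log(ratio) / se) - pnorm(-tcrit - log(ratio) / se)
}
p.ci.contains.1(c(1.00, 0.95, 0.90), n = 50)   # highest at T/R = 1.00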
martin ★★ Austria, 2009-05-06 00:59 @ ElMaestro Posting: # 3652
Dear EM!

What I tried to explain is that your “shocking data” are due to the belief/assumption regarding the true R/T ratio. In the case of R/T=1, power based on the Danish requirement increases as the sample size increases, but a larger sample size is still required (IMHO) compared to the “usual requirement” for a given power.

A 2nd attempt to answer your question: yes; on the assumption of R/T=0.95 it will be rather difficult to find a sample size for a power of 80% (power does not monotonically increase with sample size), whereas on the assumption of R/T=1 you will find a sample size to show BE with a power of at least 80%.

best regards

martin
d_labes ★★★ Berlin, Germany, 2009-05-08 11:49 @ ElMaestro Posting: # 3659
Dear ElMaestro,

very interesting!

❝ So my questions: Are these data correct? Could someone with more specialised software check those values?

I have not recalculated your values, but I have the very strong feeling they are correct.

What your data show: the higher your N, the higher the chance of failing the Danish BE criterion if the variability is low enough. This is reasonable to me because with higher N the confidence interval gets tighter (as Martin has already stated), and therefore 1.0 is not contained in the CI if your point estimate of the BE ratio is distinct enough from 1.0.

❝ Did anyone ever understand the Danish thinking?

No, never.

They have changed the usually accepted BE test from "The BE ratio (population) is allowed to vary between 0.80 and 1.25" by act of law to the test "The ratio (population) must not be distinct from 1.0". This results in your "strange" power numbers. Suppose the BE metric varies only to a very low extent (zero as a limit, but then we statisticians become unemployed): then any point estimate distinct from 1.0, however slightly, would fail the Danish criterion. IMHO this is not the correct answer to the bioequivalence question.

But let me restate: "How unsearchable are Regulator's judgments and how inscrutable Regulator's ways! Amen." (Romans 11:33, 36)

—
Regards, Detlew
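To see that effect in numbers, a small R sketch (assuming the observed point estimate stays at 0.95 and the observed CV at 28%, whatever the sample size):

n     <- c(16, 24, 36, 50, 70, 100)   # subjects per sequence
sigma <- sqrt(log(1 + 0.28^2))
upper <- exp(log(0.95) + qt(0.95, 2 * n - 2) * sigma / sqrt(n))
round(upper, 4)   # the upper 90% CI bound; once it drops below 1.0
                  # the Danish criterion fails although the usual one passes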
d_labes ★★★ Berlin, Germany, 2009-05-08 17:53 @ d_labes Posting: # 3661
Dear ElMaestro,

PS: Meanwhile I have tried a simulation with "The power to knoff". Here are my results using CV=28%, T/R=95%:

Not exactly the same numbers as yours, but the same trend.

BTW: How long did you wait for your 120 000 simulations each? Did you get bored?

—
Regards, Detlew
ElMaestro ★★★ Denmark, 2009-05-08 22:19 @ d_labes Posting: # 3667
Hi dlabes,

❝ Not exactly the same numbers as yours, but the same trend.
❝ BTW: How long did you wait for your 120 000 simulations each?

As this is a simulation study, I think the numbers are VERY similar. Or did I overlook sumfin?

On my machine it takes 4.7 secs to do 120000 simulations. I bet I can get it working at a much faster speed once (if!) I optimise my code, see below. My machine is a laptop with a Celeron M 1.5 GHz (yes, they knew how to make those laptops back then). I used MinGW version 1.4.3 interfaced by DevC++ 4.9.9.2 with optimisations for x86 turned on.

Best regards EM.
Helmut ★★★ Vienna, Austria, 2009-05-08 15:23 @ ElMaestro Posting: # 3660
Dear ElMaestro!

❝ I am still wondering about the Danish policy for BE [...]

You are not alone.

❝ *: What's the real power at CV=65%, R/T=95%, 38 sbj (19 in each seq)??

Good question. I think we are pushing software to the limits, i.e., running into troubles of getting a reasonable value of the noncentral t-distribution (numeric precision, ...). Fartssie comes up with -0.0278 (!), StudySize 2.01 simply gives up (-; with 20 subjects/sequence it gives 0.421%), and my R-code (which starts with a <- 0.05 # alpha) gets stuck at 40 subjects (power 0.454%)...

The nasty point in the Danish requirement is formulations with low CVs. The current guideline states "The clinical and analytical standards imposed may also influence the statistically determined number of subjects. However, generally the minimum number of subjects should be not smaller than 12 unless justified." whereas the BE-draft comes up with "The minimum number of subjects in a cross-over study should be 12."
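A sketch of the common noncentral-t shortcut for TOST power (an illustration, not the R-code mentioned above; equal sequence sizes assumed). At CV = 65%, a ratio of 0.95 and 19 subjects per sequence it even returns a negative value, much like Fartssie: the shortcut ignores that both one-sided tests must pass jointly, which is where exact methods (Owen's Q) or simulation come in:

a     <- 0.05                           # alpha
CV    <- 0.65; ratio <- 0.95; n <- 19   # n = subjects per sequence
sigma <- sqrt(log(1 + CV^2))
se    <- sigma * sqrt(1 / n)
df    <- 2 * n - 2
tcrit <- qt(1 - a, df)
ncp1  <- (log(ratio) - log(0.80)) / se  # noncentrality vs the lower limit
ncp2  <- (log(ratio) - log(1.25)) / se  # noncentrality vs the upper limit
pt(-tcrit, df, ncp2) - pt(tcrit, df, ncp1)  # approximate power; negative here!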
—
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes ★★★ Berlin, Germany, 2009-05-08 18:06 @ Helmut Posting: # 3662
Dear Helmut, dear ElMaestro!

❝ ❝ *: What's the real power at CV=65%, R/T=95%, 38 sbj (19 in each seq)??

❝ Good question. I think we are pushing software to the limits, i.e., running into troubles of getting a reasonable value of the noncentral t-distribution (numeric precision, ...).
❝ Fartssie comes up with -0.0278 (!), StudySize 2.01 simply gives up (-), and my R-code [...]
❝ gets stuck at 40 subjects (power 0.454%)...

My Extreme computing SASophylistic gives

Power = 4.50940122 %

As you know, precise up to the last digit.

Simulated with 8000 resamplings I come up with 4.6% (normal BE test).

—
Regards, Detlew
Helmut ★★★ Vienna, Austria, 2009-05-08 18:15 @ d_labes Posting: # 3663
Dear D. Labes!

❝ My Extreme computing SASophylistic gives
❝ Power = 4.50940122 %
❝ As you know, precise up to the last digit

Oh wow, what a beasty number cruncher you have!

—
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro ★★★ Denmark, 2009-05-08 21:40 (edited on 2009-05-09 10:13) @ Helmut Posting: # 3666
Hi HS,

❝ Good question. I think we are pushing software to the limits, i.e., running into troubles of getting a reasonable value of the noncentral t-distribution (numeric precision, ...).
❝ Fartssie comes up with -0.0278 (!), StudySize 2.01 simply gives up (-, with 20 subjects/sequence gives 0.421%), and (blah blah blah)

I actually meant this as a trick question. As you correctly point out, this is not an exact science, even with advanced and trusted software. While mathematicians may say that a problem (such as power in a BE study) has this and that exact solution, pointing at some ridiculously complex integrals, many such problems are solved numerically today. Integration is a typical example: it requires some Al Gore Rhythms which are heavily parameterised. Robustness is all about finding the set of parameters that gives a consistent answer, but we can sometimes find conditions where the algorithm fails or miscalculates. The figures above illustrate it.

Thus we might ask further: under which conditions will Software X give me an answer that is 5% wrong? That is one helluva difficult question to answer when exact solutions are not available to compare with.

EM.
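A toy R example of exactly that, adapted from the examples in R's own ?integrate help page: almost all of the integrand's mass sits near 0, and over a huge finite range the adaptive quadrature can miss it and report a wrong answer with a clean-looking error estimate:

integrate(dnorm, 0, 20000)   # fails on many systems: reports ~0, true value 0.5
integrate(dnorm, 0, Inf)     # the infinite-range transformation gets 0.5 right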
Helmut ★★★ Vienna, Austria, 2009-05-08 18:56 @ ElMaestro Posting: # 3664
Dear ElMaestro!

❝ N=c(16, 20, 24, 28, 32, 36, 40, 44, 48, 50, 60, 70, 80, 90, 100, 110)
❝ Pwr1=c(0.712, 0.751, 0.754, 0.738, 0.718, 0.696, 0.677, 0.654, 0.637, 0.629, 0.581, 0.535, 0.493, 0.451, 0.415, 0.38)
❝ Pwr0=c(0.772, 0.856, 0.911, 0.946, 0.967, 0.98, 0.989, 0.993, 0.996, 0.997, 0.999, 1, 1, 1, 1, 1)
❝
❝ Example of how to read this: if we have 16 sbj in each sequence, then the power with the standard requirements is approximately 77.2% (Fartssie gives 77.6% on my machine); with the Danish requirements it is approximately 71.2%.

I get 77.62276% (N=n1+n2=32) in

How did you set limits for the Danish requirements?

—
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro ★★★ Denmark, 2009-05-08 21:17 (edited on 2009-05-09 10:51) @ Helmut Posting: # 3665
Hi HS

❝ I get 77.62276% (N=n1+n2=32) in

Good, it seems I have good agreement with your software; I am happy to see that.

❝ How did you set limits for the Danish requirements?

I am not sure I understand what you mean when asking how I set the limits for the Danish reqs. I evaluate BE, then I apply a further requirement which goes like this: if the upper bound < 1.0 then it is a failure, and if the lower bound > 1.0 then it also fails. In C lingo:

if (FailWhen1NotPartOfCI)
  {
   /* Danish criterion: the 90% CI must contain 1.0, i.e. on the log
      scale it must straddle log(1); fail when the CI lies entirely
      below or above that value */
   if ((Upper < log(1.0)) || (Lower > log(1.0))) OK = 0;
  }

where OK is just my own private little boolean (actually in C it is an int, but that's another story) to indicate whether a dataset is accepted or rejected. But right now I have a feeling this is not really what you meant?

Best regards. EM

PS: If there are some geekolophystic souls out there to whom it looks like my code has not been optimised for speed (or who would like to tell me log(1)=0), then yes, you are right, there's still work to be done