Bryony Simmons ☆ UK, 2018-02-01 13:16 (2642 d 09:34 ago) Posting: # 18320 Views: 23,701 |
|
Hi, I am in the process of conducting a sample size calculation for a crossover BE study. There are four-treatments (2xreference dosages & 2xtreatment dosages) & I plan to use the Williams' design as follows:
ABCD
BDAC
CADB
DCBA
The coefficient of variation from previous studies is 20% & I am assuming the true test reference ratio to be between 0.95 and 1.05. I want to demonstrate bioequivalence (0.80-1.25) at 90% power at the 5% level. Using these figures, if it was a standard AB/BA crossover, I estimate that I would require 12 individuals to complete each arm - is this correct? I am unsure how this calculation is extended to fit the 4x4 design - do I simply randomise 12 more individuals to each of the remaining two sequences? Further, if I planned to have just 1 treatment dose, changing the design as follows:
ABC
BCA
CAB
is it reasonable to randomise 12 individuals per sequence? I really appreciate any help on this question. Best wishes, Bryony
Edit: Category changed; see also this post #1. [Helmut] |
Helmut ★★★ Vienna, Austria, 2018-02-01 14:01 (2642 d 08:48 ago) @ Bryony Simmons Posting: # 18321 Views: 22,225 |
|
Hi Bryony,
❝ […] There are four-treatments (2xreference dosages & 2xtreatment dosages) & I plan to use the Williams' design…
❝ The coefficient of variation from previous studies is 20% & I am assuming the true test reference ratio to be between 0.95 and 1.05. I want to demonstrate bioequivalence (0.80-1.25) at 90% power at the 5% level.
I recommend the package PowerTOST for R (open source & free of charge). Your values:
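The code block that followed "Your values" did not survive; with PowerTOST's conventions it was presumably something like the sketch below (not necessarily Helmut's verbatim snippet; the design code "2x4x4" denotes the 4-sequence, 4-period crossover):

```r
library(PowerTOST)
# Sketch: total sample size for the 4x4 Williams' design ("2x4x4"),
# CV 20%, assumed GMR 0.95, target power 90%, limits 0.80-1.25 (defaults).
sampleN.TOST(CV = 0.2, theta0 = 0.95, targetpower = 0.9, design = "2x4x4")
```

This should report a total sample size of 24, i.e., six subjects per sequence.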
n is the total sample size (hence, six per sequence).
❝ […] if it was a standard AB/BA crossover, I estimate that I would require 12 individuals to complete each arm - is this correct? I am unsure how this calculation is extended to fit the 4x4 design - do I simply randomise 12 more individuals to each of the remaining two sequences?
Not quite so. In a 2×2 crossover we have n–2 degrees of freedom and in a 4×4 we have 3n–6. Sample size estimation is an iterative process (the sample size is increased until at least the desired power is reached). In this process the degrees of freedom are important. Hence, simply multiplying the sample size of a 2×2 is not correct.
❝ Further, if I planned to have just 1 treatment dose changing the design as follows:
❝ ABC
❝ BCA
❝ CAB
❝ Is it reasonable to randomise 12 individuals per sequence?
No.
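Coming back to the degrees of freedom: their effect can be made visible with power.TOST() by comparing the same total n in both designs (an editorial sketch under the same assumptions as above, not code from the original post):

```r
library(PowerTOST)
# Same total n, CV, and GMR - only the residual degrees of freedom differ:
# n-2 = 22 in the 2x2 crossover vs. 3n-6 = 66 in the 4x4 Williams' design.
power.TOST(CV = 0.2, theta0 = 0.95, n = 24, design = "2x2")
power.TOST(CV = 0.2, theta0 = 0.95, n = 24, design = "2x4x4")
```

The standard error of the treatment contrast is the same; only the larger df of the 4×4 gives it slightly higher power.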
Be aware that simultaneous comparisons inflate the Type I Error (the patient’s risk). I guess that in your first design you will not compare the two test treatments but only T1 vs. R1, T1 vs. R2, T2 vs. R1, and T2 vs. R2. With these k=4 comparisons (each performed at the nominal α 0.05) the familywise (Type I) error rate will be 1–(1–α)^k ≤ 18.55%. Bonferroni’s adjusted α will be α/k = 0.0125, which translates into a 100(1–2α/k) = 97.5% two-sided confidence interval whilst keeping the FWER, 1–(1–α/k)^k ≤ 4.91%, below the nominal α 0.05. The lower α will substantially increase the sample size: 24 → 36
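The familywise arithmetic above can be checked in a few lines of base R:

```r
# Checking the familywise-error arithmetic for k = 4 comparisons at alpha 0.05.
alpha <- 0.05
k     <- 4
fwer.unadj <- 1 - (1 - alpha)^k       # FWER bound with unadjusted tests
alpha.bonf <- alpha / k               # Bonferroni-adjusted alpha
fwer.bonf  <- 1 - (1 - alpha.bonf)^k  # FWER with the adjusted alpha
round(100 * c(fwer.unadj, alpha.bonf, fwer.bonf), 2)
# 18.55  1.25  4.91
```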
BTW, in your first design you opted for a Williams’ design. May I ask why?
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
d_labes ★★★ Berlin, Germany, 2018-02-01 15:01 (2642 d 07:49 ago) @ Helmut Posting: # 18322 Views: 22,417 |
|
Dear Helmut,
❝ Be aware that simultaneous comparisons inflate the Type I Error (the patient’s risk). I guess that in your first design you will not compare the two test treatments but only T1 vs. R1, T1 vs. R2, T2 vs. R1, and T2 vs. R2. With these k=4 comparisons (each performed at the nominal α 0.05) the Familywise (Type I) Error Rate will be 1–(1–α)^k ≤ 18.55%. Bonferroni’s adjusted α will be α/k or 0.0125 which translates into a 100(1–2α/k) or 97.5% two-sided confidence interval whilst keeping the FWER with 1–(1–α/k)^k ≤ 4.91% below the nominal α 0.05. The lower α will substantially increase the sample size ...
Although you may be correct, I must confess that I seldom if ever did this alpha-adjustment when I used a higher-order design. Reasons:
• Never had problems with using un-adjusted alpha
— Regards, Detlew |
Helmut ★★★ Vienna, Austria, 2018-02-01 16:48 (2642 d 06:02 ago) @ d_labes Posting: # 18323 Views: 22,321 |
|
Dear Detlew, ❝ Also you may be correct I must confess that I seldom or never had done this alpha-adjustment in case I used a higher order design. Reasons: ❝ • Never had problems with using un-adjusted alpha I faced two.
nobody nothing 2018-02-01 18:10 (2642 d 04:40 ago) @ Helmut Posting: # 18324 Views: 22,109 |
|
Case 2: Did you start a discussion with the authority? — Kindest regards, nobody |
d_labes ★★★ Berlin, Germany, 2018-02-01 19:57 (2642 d 02:53 ago) @ Helmut Posting: # 18326 Views: 22,039 |
|
Dear Helmut,
Point 1: IMHO the deficiency is not correct. To show dose linearity, the equivalence tests for both dose levels have to be fulfilled, i.e., a combination of both tests with AND. That's similar to the BE decision for AUC and Cmax: both have to have a positive outcome, otherwise dose linearity would be rejected. No alpha-adjustment at all. As always stated by me in statistical analysis plans for such cases under the heading "Multiplicity".
Point 2: In the light of the combination of two BE statements (tablet vs. reference or capsule vs. reference) this is a combination with OR, if, and only if, a global statement is appropriate here. IMHO both hypotheses stand on their own. As a friend of mine told me quite recently: since only the tablet or the capsule comes to market, a patient risk exists only for that formulation.
This case is what my former company's boss always told the sponsors: be aware that the regulatory bodies may ask you for 95% CIs as the BE test if you submit "pilot" studies in which one of the two Test formulations showed BE and the other did not. Regulators acted differently: sometimes they accepted the study without deficiencies, sometimes they really asked for 95% CIs. Fortunately, in all cases I remember, 90% CIs and 95% CIs gave the same answer: BE proven. That's because such cases (pilot BE studies with sample sizes around 12 and a positive outcome) only occur for very small variabilities, CV around 10%.
Thus, in contrast to nobody, it would be reasonable to start a discussion with the authority concerning your point 1. But I know of course that discussing with regulatory authorities is NIL: answer their questions without questioning their questions.
— Regards, Detlew |
Relaxation ★ Germany, 2018-02-02 12:12 (2641 d 10:38 ago) @ d_labes Posting: # 18331 Views: 22,068 |
|
Dear All. Ignoring the risk that I may have got it totally wrong, but from my experience (as a non-statistician):
❝ Point 2. In the light of the combination of two BE statements (tablet vs. reference or capsule vs. reference) this is combination with OR. If, and only if a global statement is here appropriate. IMHO here both hypotheses are standing for it's own.
In current discussions on this topic the general opinion around here is quite clear: testing more than one option vs. Reference is multiple testing because you have "more than one shot" → adjust the ALPHA. On the other hand, in reality more often than not, sponsors will then simply conduct two studies. I was never brave enough to comment that this would be "more than one shot", too, and that we should adjust the ALPHA in each study.
But seriously, why should testing alternatives in one study require a punishment and testing in separate studies not? Because I pay my fee in doubled overhead study costs? Or because I don't have to tell anybody about the "other study" during the application? Best regards, Relaxation. |
nobody nothing 2018-02-02 13:49 (2641 d 09:01 ago) @ Relaxation Posting: # 18332 Views: 21,984 |
|
This alpha-adjustment = getting punished for touching the same data (!) twice has a bit of the touch of quantum physics. You will accept it, but it's not really intuitive in the world we (macroscopically) live in... — Kindest regards, nobody |
Helmut ★★★ Vienna, Austria, 2018-02-02 17:14 (2641 d 05:36 ago) @ Relaxation Posting: # 18333 Views: 22,051 |
|
Hi Relaxation,
❝ But seriously, why should testing alternatives in one study require a punishment and testing in seperate studies not. Because I pay my fee in doubled overhead study costs? Or because I don't have to tell anybody of the "other study" during application
IMHO, you have to submit the study’s synopsis. |
Relaxation ★ Germany, 2018-02-02 20:41 (2641 d 02:08 ago) @ Helmut Posting: # 18334 Views: 22,262 |
|
❝ ❝ But seriously, why should testing alternatives in one study require a punishment and testing in seperate studies not. Because I pay my fee in doubled overhead study costs? Or because I don't have to tell anybody of the "other study" during application
❝ IMHO, you have to submit the study’s synopsis.
Hello Helmut. That was what I thought myself (and would also argue in favour of by gut feeling). But at least in the EU I think we are asked to submit these for the pilot studies conducted with the formulation you apply for. As far as I remember, synopses from other studies during formulation development are also asked for, but my guess would be that there might be a grey area in the discussion whether a study conducted with Alternative 1 would be part of the development of Alternative 2. I personally would argue in favour of "full disclosure", but I generally …
P.S.: I looked it up in the BE-GL: all relevant studies ... comparing the formulation applied for (i.e. same composition and manufacturing process) with a reference medicinal product marketed in the EU. Would apply to the "tablet vs. reference and capsule vs. reference" example brought up earlier, wouldn't it? Two shots. Best regards and a nice weekend to everybody, Relaxation. |
Astea ★★ Russia, 2018-02-02 22:45 (2641 d 00:05 ago) @ Relaxation Posting: # 18335 Views: 22,014 |
|
Dear Smart People! I am suffering trying to understand the problems and perspectives of alpha-adjustment. Wherever I look I see TIE inflation...
For HVD:
– FDA RSABE seemed to have problems for CV around 30%
For well-known adaptive designs (Potvin, Xu...) TIE also behaves badly...
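[Editorial aside, not part of the original post: the RSABE inflation near CV 30% can be reproduced with PowerTOST's power.RSABE(), which simulates the empirical Type I Error when the true GMR is fixed at the conventional limit 1.25; design, n, and nsims below are assumptions for illustration.]

```r
library(PowerTOST)
# Sketch: empirical Type I Error of the FDA's RSABE at CVwR = 30%,
# partial replicate design ("2x3x3"), n = 24, true GMR at the limit 1.25.
power.RSABE(CV = 0.30, n = 24, theta0 = 1.25, design = "2x3x3", nsims = 1e6)
```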
TIE may also be inflated by evaluating different methods for PK metrics (when one of them turns out to be overpowered, as in the case of an adaptive design or different CIs for Cmax and AUC). I suspect TIE will also be inflated in studies of NTIDs (by using the FDA approach or by using different confidence limits for the two metrics). Now you talk about higher-order designs and dose-proportionality studies... So... Are there any ways to deal with it except the bright idea of iteratively adjusted alpha? While looking through the literature I've found only some suggestions of modifying ABE (with new procedures or nonlinear CI limits) or alternatives for ABE (like GSD or two-stage designs)...
— "Being in minority, even a minority of one, did not make you mad" |
Helmut ★★★ Vienna, Austria, 2018-02-03 00:39 (2640 d 22:11 ago) @ Astea Posting: # 18336 Views: 21,947 |
|
Hi Astea,
❝ I am suffering trying to understand the problems and perspectives of alpha-adjustment.
You are not alone.
❝ Wherever I look I see TIE inflation...
❝ For HVD:
❝ – FDA RSABE seemed to have problems for CV around 30%
Only ≤30% (see there).
❝ For well-known adaptive designs (Potvin, Xu...) TIE also behaves badly...
For most of them, not at all. Ones with a slight (!) inflation can be handled by more adjustment. Xu is fine.
❝ TIE may be also inflated by evaluating different methods for PK metrics (when one of them turns to be overpowered - as in the case of adaptive design or different CI for two Cmax and AUC).
Theoretically the IUT should protect us. Benjamin and Detlew are working on a new function power.2TOST.sim() and an updated sampleN.2TOST() in PowerTOST. As expected, the overall TIE is much lower than the single ones.
❝ I suspect TIE will be also inflated in studies of NTDs also (by using FDA approach or by using different confidence limits for two metrics).
Might well be.
❝ Now you talk about higher-order design and dose-proportional studies...
That’s an endless story.
❝ So... Are there any ways to deal with it excepting the bright idea of iteratively adjusted alpha?
THX for calling it bright! For reference-scaling a new method* by the two Lászlós controls the TIE much better than RSABE/ABEL – but only for full replicate designs. Yet another good reason to avoid the bloody partial replicate. Detlew already implemented a new function power.RSABE2L.sdsims() in PowerTOST. You need the development version on GitHub to run this code:
library(PowerTOST)
❝ While looking through the literature I've found only some suggestions of modificating ABE (with new procedures or nonlinear CI limits) or alternatives for ABE (like GSD or Two-stage designs)...
One of the authors of [1] is user Ben in the Forum and a co-author of PowerTOST. Incidentally I reviewed both papers. Demanding but great fun at the end.
d_labes ★★★ Berlin, Germany, 2018-02-04 13:40 (2639 d 09:10 ago) @ Helmut Posting: # 18340 Views: 21,893 |
|
Dear Astea, dear Helmut,
❝ ❝ I suspect TIE will be also inflated in studies of NTDs also (by using FDA approach or by using different confidence limits for two metrics).
❝ Might well be.
There is no reason to speculate. Fire up the all-in-one device suitable for every purpose, PowerTOST, and use the function power.NTIDFDA(). Try this (for the homoscedastic case swR=swT):
library(PowerTOST)
With n=12 we get:
CV      GMR       TIE
But if we use the n obtained to have a target power of 0.8, using theta0=0.975, we get:
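[The script and most of the output rows did not survive; the following is a hedged reconstruction consistent with the surviving row (CV 0.03 → n 214, GMR 1.032106, TIE 0.051258), not Detlew's verbatim code. The GMR column is the upper implied FDA limit exp(ln(1.11111)/0.1 × swR); argument names follow PowerTOST, everything else is an assumption.]

```r
library(PowerTOST)
# Hedged sketch: for each CV, estimate n for 80% power at theta0 = 0.975,
# then simulate the TIE with the true GMR at the upper implied FDA limit
# exp(log(1.11111)/0.1 * swR); homoscedastic case swT = swR.
CVs <- c(0.03, 0.05, 0.10)
res <- data.frame(CV = CVs, n = NA, GMR = NA, TIE = NA)
for (i in seq_along(CVs)) {
  swR        <- CV2se(CVs[i])                 # within-subject standard deviation
  res$n[i]   <- sampleN.NTIDFDA(CV = CVs[i], theta0 = 0.975, targetpower = 0.8,
                                design = "2x2x4", print = FALSE)[["Sample size"]]
  res$GMR[i] <- exp(log(1.11111) / 0.1 * swR) # upper implied limit (< 1.25 here)
  res$TIE[i] <- power.NTIDFDA(CV = CVs[i], n = res$n[i], theta0 = res$GMR[i],
                              design = "2x2x4")
}
print(res, row.names = FALSE)
```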
CV      n    GMR       TIE
0.03   214   1.032106  0.051258
Indeed a small alpha-inflation at low CVs. If we were Potvin-adepts this would be a negligible alpha inflation, namely below 0.052. — Regards, Detlew |
Astea ★★ Russia, 2018-02-04 21:04 (2639 d 01:46 ago) @ d_labes Posting: # 18342 Views: 21,664 |
|
Dear Helmut! Thank you for the rectification! As long as you and Detlew care about it, the world can breathe calmly!
Dear Detlew! And what about the different CIs for the two metrics? Can you invent something like Power2.RSABE or Power2.NTIDFDA, or is it a stupid idea? My thoughts are as follows: according to some product-specific EMA guidelines (sirolimus for example) we should shorten the limits only for AUC but not for Cmax. May it lead to TIE inflation or not? For an extreme example, suppose we calculate the sample size based on a CV of 25% (I understand that an NTID should not have large variance, but nevertheless):
sampleN.TOST(CV=0.25, theta0=0.975, theta1=0.9, theta2=1.11, design="2x2")
122 volunteers! Then by using power.2TOST for CV=0.3 I get a TIE very slightly above 0.05:
power.2TOST(CV=c(0.3,0.25), n=122, theta1=c(0.8, 0.9), theta2=c(1.25, 1.11), theta0=c(1, 1.11), rho=0)
In this case the difference from 0.05 is negligible, but maybe one could find a cruder example? Or am I using or interpreting power.2TOST incorrectly?
❝ Try this (for the homoscedastic case swR=swT): |
Helmut ★★★ Vienna, Austria, 2018-02-05 02:01 (2638 d 20:49 ago) @ Astea Posting: # 18343 Views: 22,041 |
|
Hi Astea,
❝ My thoughts are as follows: according to some product specific EMA guideline (sirolimus for example) we should shorten the limit only for AUC but not for Cmax. May it leed to TIE inflation or not?
❝ Then by using Power.2TOST for CV=0.3 I get TIE very slightly upper than 0.05
❝ [1] 0.05000002
❝ In this case the diference from 0.05 is negligible, but may be one could find more rude example? Or am I using or interpretate power.2TOST uncorrectly?
You are using it correctly, but this function underwent a minor revision. In the development version 1.4.6.9000 you would get
[1] 0.05003
or better, with nsims=1e6 for the TIE:
[1] 0.050059
IMHO, AUC and Cmax generally are highly correlated.
power.2TOST(CV=c(0.3, 0.25), n=122, theta1=c(0.8, 0.9),
❝ And what to do if variances are different (non-negotiable word heteroscedasticity)?
Extending Detlew’s code for swT=2swR:
library(PowerTOST)
For swT=swR:
swT/swR: 1
And for swT=½swR:
swT/swR: 0.5
Apart from the CVs there is a dependency of the TIE on the sample size (common to the FDA’s scaling methods). If we force the minimum sample size to 24 dosed subjects (required by the FDA for RSABE) we get with min.n <- TRUE:
swT/swR: 1
Hence, the TIE gets larger for studies forced to n=24 as compared to the ones designed for the target power.
Edit: Corrected code there. |
d_labes ★★★ Berlin, Germany, 2018-02-05 17:40 (2638 d 05:10 ago) @ Helmut Posting: # 18354 Views: 21,613 |
|
Dear Helmut, thank you for that piece of work beyond the call of duty. Only one nitpick: I think the entries under signif p<0.05 should read p>0.05. Edit: Congratulations for post #18,000. [Helmut] — Regards, Detlew |
Helmut ★★★ Vienna, Austria, 2018-02-05 18:49 (2638 d 04:00 ago) @ d_labes Posting: # 18355 Views: 21,706 |
|
Dear Detlew,
❝ Only one nitpicking:
❝ I think the entries under signif p<0.05 should read p>0.05.
Hhm, why? I’m testing at the 5% level for a significant inflation. The limit for 10⁶ sim’s is
binom.test(0.05*1e6, 1e6, alternative="less")$conf.int[2]
[1] 0.05035995
Hence, any TIE > this value should be significant. |
d_labes ★★★ Berlin, Germany, 2018-02-05 23:17 (2637 d 23:33 ago) @ Helmut Posting: # 18359 Views: 21,549 |
|
Dear Helmut,
❝ ❝ Only one nitpicking:
❝ ❝ I think the entries under signif p<0.05 should read p>0.05.
❝ Hhm, why? I’m testing at the 5% level for a significant inflation. The limit for 10⁶ sim’s is
❝ [1] 0.05035995
❝ Hence, any TIE > this value should be significant.
That's totally correct. But the entry suggests to me that the TIE (with values >0.05) is significant at <0.05. Maybe it's my misunderstanding. To avoid this, many statisticians state "significant at alpha=0.05". But then the hypotheses would have to be stated differently. — Regards, Detlew |
Helmut ★★★ Vienna, Austria, 2018-02-06 13:34 (2637 d 09:16 ago) @ d_labes Posting: # 18363 Views: 21,721 |
|
Dear Detlew,
❝ […] the entry suggest for me that the TIE (with values >0.05) is significant <0.05. Maybe it's my misunderstanding. To avoid this many statisticians state "significant at alpha=0.05". But then the Hypotheses have to be given else.
I see! Change lines 17 and 32 from
res <- cbind(CVs, n=NA, forced=FALSE, GMR=NA, TIE=NA,
to
res <- cbind(CVs, n=NA, forced=FALSE, GMR=NA, TIE=NA,
Will give
swT/swR: 2 |
d_labes ★★★ Berlin, Germany, 2018-02-05 17:35 (2638 d 05:14 ago) (edited on 2018-02-05 22:04) @ Astea Posting: # 18353 Views: 21,755 |
|
Dear Astea!
❝ And what about the different CI for two metrics? Can you invent smthg like Power2.RSABE or Power2.NTIDFDA or it is a stupid idea?
If time allows. But pensioners never have time.
From theory the intersection-union principle should protect us from alpha-inflation, as long as the single tests to be combined also have size 0.05. Since this isn't the case for RSABE for NTIDs, as Helmut has shown in his post, an alpha inflation has to be supposed. Thus power2.NTIDFDA(), or whatever it would be called, would only have the purpose of showing with numbers that the TIE is not ≤0.05, as it should be. Educational purpose only. Duno if all the effort and labour to implement sumfink like power2.NTIDFDA() is worth that aim.
❝ For an extreme example, suppose we calculate sample size, basing on CV 25% (I understand that NTID should not have large variance but nevertherless):
❝ 122 volunteers!
This sample size estimation is for the EMA recommendation to deal with NTIDs, namely to shrink the BE acceptance range to 0.9 ... 1.1111... The FDA reference-scaled ABE method uses the conventional limits 0.8 ... 1.25 from a CV of ~0.214 upward. Below that the acceptance ranges are tightened; at swR=0.1 f.i. they are 0.9 ... 1.1111... This leads of course to different sample sizes. Use sampleN.NTIDFDA() instead.
Moreover, the FDA method requires a replicate design in which both CVwT and CVwR are estimable, that is a full replicate "2x2x4" or a 3-period replicate "2x2x3" with sequences TRT | RTR, although the latter is not mentioned in the warfarin guidance. Thus your design used in the sample size estimation, "2x2", is not possible.
❝ Then by using Power.2TOST for CV=0.3 I get TIE very slightly upper than 0.05
❝ [1] 0.05000002
What you calculate here is not the TIE. Finding the TIE with more than one test is much more complicated. There are other values for theta0 which also belong to the Null "bioinequivalence", f.i.
theta0 = c(0.8, 1) or theta0 = c(1.25, 1) and so on and so on. Over all these sets the maximum (supremum) of the power has to be determined to get the TIE. This is implemented in the function type1error.2TOST(). Be warned: the run-time of this function in the development version, after changing to simulations as the basic concept, is horrible. Hope this doesn't give you a headache. Two or more TOSTs are for hardcore statisticians only. And they also have difficulties, as I know (I'm not a statistician). — Regards, Detlew |
Astea ★★ Russia, 2018-02-05 18:52 (2638 d 03:57 ago) @ d_labes Posting: # 18356 Views: 21,649 |
|
Dear Helmut! Thank you for the clarification! Dear Detlew! Sorry for mixing terms. In the case of power.2TOST I meant the EMA approach (I've mentioned the sirolimus guideline) for two metrics with different, but written in stone, CI limits. I've turned to this case because I didn't find a way to "play with numbers" via power2.NTIDFDA, since it doesn't exist. I agree with you that it should have only an educational purpose: to show the slight TIE inflation for NTIDs for completeness. That is, low TIE inflation may be a common phenomenon.
❝ This is implemented in the function
type1error.2TOST()
I tried this:
type1error.2TOST(CV=c(0.3,0.25), n=122, theta1=c(0.8, 0.9), theta2=c(1.25, 1.11), rho=0.9, details = FALSE)
[1] 0.05000015
Warning:
Result of maximization over nullset may not be reliable.
❝ Hope this doesn't cause headache to you. Two or more TOSTs is for hardcore statisticians only. And they also have difficulties as I know
(no, no, I'm not a statistician, but others among the authors of …) The best statisticians I've ever met told me they are not actually statisticians.
P.S. 111.00 instead of logical 111.11 because it was written in the mentioned guideline. Incredible rounding methods! |
Helmut ★★★ Vienna, Austria, 2018-02-05 19:10 (2638 d 03:40 ago) @ Astea Posting: # 18357 Views: 21,843 |
|
Hi Astea,
❝ I tried this:
❝ type1error.2TOST(CV=c(0.3,0.25), n=122, theta1=c(0.8, 0.9), theta2=c(1.25, 1.11), rho=0.9, details = FALSE)
❝ [1] 0.05000015
❝ Warning:
❝ Result of maximization over nullset may not be reliable.
In the new version with the argument details=TRUE, showing the TIE for all eight intersection null sets:
type1error.2TOST(CV=c(0.3, 0.25), n=122, theta1=c(0.8, 0.9),
1e+06 simulations. Time consumed (secs)
❝ The best statisticians I've ever met told me they are not actually statisticians.
Neither am I... Welcome to the club!
❝ P.S. 111.00 instead of logical 111.11 because it was written in the mentioned guideline.
Where did you find that? 90.00–111.11% for AUC0–t of sirolimus, AUC0–72 of tacrolimus and everolimus.
❝ Increadible rounding methods!
I’m not willing to accept 111.11%. Heck, with an acceptable ∆ of 10% we get \(100(1-\Delta)^{-1}=111.1\dot{1}\) – nothing else! I will not eradicate my second-grade math only cause this number was proclaimed ex cathedra as truth by the omniscient almighty oracle. Had to swallow already rounding of the CI. Double rounding? Gimme a break! |
Astea ★★ Russia, 2018-02-05 20:27 (2638 d 02:23 ago) @ Helmut Posting: # 18358 Views: 21,665 |
|
Dear Helmut!
❝ Where did you find that?
❝ I’m not willing to accept 111.11%. Heck, with an acceptable ∆ of 10% we get \(100(1-\Delta)^{-1}=111.1\dot{1}\) – nothing else! I will not forget my second-grade math only cause it’s claimed ex cathedra by the oracle. Had to swallow already rounding of the CI. Double rounding? Gimme a break!
Then why do we use 80.00? Because in the former case the numbers look nicer and are easier to remember …
Of course 111.00 is nonsense and should be forgotten. |
Helmut ★★★ Vienna, Austria, 2018-02-06 01:12 (2637 d 21:37 ago) @ Astea Posting: # 18361 Views: 21,592 |
|
Hi Astea,
❝ ❝ I’m not willing to accept 111.11%. Heck, with an acceptable ∆ of 10% we get \(100(1-\Delta)^{-1}=111.1\dot{1}\) – nothing else! I will not forget my second-grade math only cause it’s claimed ex cathedra by the oracle. Had to swallow already rounding of the CI. Double rounding? Gimme a break!
❝ Then why do we use 80,00? "Because in the former case the numbers look nicer and are easier to remember
Yep, I’ve been there.
❝ Of course 111.00 is nonsense and should be forgotten.
Like 111.11. Every idiot should see that \(\sqrt{0.9\times 0.9^{-1}}=\sqrt{0.9/0.9}=1\), which is not the fucking same as \(\sqrt{0.9\times 1.1111}=0.9999949999874999374996093722656\ldots\) In R-speak:
identical(sqrt(0.9*0.9^-1), 1)
Heck, we want the acceptance range in log-scale to be symmetrical around zero and not –5·10⁻⁶. |
d_labes ★★★ Berlin, Germany, 2018-02-05 23:33 (2637 d 23:16 ago) @ Astea Posting: # 18360 Views: 21,590 |
|
Dear Astea,
❝ I tried this:
❝ [1] 0.05000015
❝ Warning:
❝ Result of maximization over nullset may not be reliable.
This warning in the old version of that function is the result of numerical difficulties in obtaining the supremum over the null sets. For an update see Helmut's post. — Regards, Detlew |