kms.srinivas Junior India, 20171226 07:39 Posting: # 18082 Views: 3,995 

Dear All, good morning. I keep getting surprising results: I estimate the sample size for 80% power, but in the results the power always reaches about 98%. How is this possible every time? Please let me know. Thank you.
d_labes Hero Berlin, Germany, 20171226 12:08 @ kms.srinivas Posting: # 18083 Views: 3,706 

Dear kms.srinivas, please give us more details. — Regards, Detlew 
Helmut Hero Vienna, Austria, 20171226 12:22 @ kms.srinivas Posting: # 18084 Views: 3,715 

Hi kms, » I'm getting surprising results: I take the sample size with 80% power and in the results the power always reaches 98%. How is it possible every time? If the irrelevant post hoc power is always 98%, I suspect a flaw in the software (or SAS macro). If it is always higher than the one used in sample size estimation it may be a miscalculation – ineradicably “popular” in Indian CROs (see this post). Revise your procedures. — Regards, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
kms.srinivas Junior India, 20171226 13:00 @ Helmut Posting: # 18085 Views: 3,716 

» If the irrelevant post hoc power is always 98%, I suspect a flaw in the software (or SAS macro). » If it is always higher than the one used in sample size estimation it may be a miscalculation – ineradicably “popular” in Indian CROs (see this post). Revise your procedures I'm sorry; I meant to say that it always comes out in the interval 95 to 99, and sometimes 100 (of course, only once). Sometimes we get power in the 80s, but very rarely. Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut] 
ElMaestro Hero Denmark, 20171226 14:11 @ kms.srinivas Posting: # 18087 Views: 3,688 

Hi kms.srinivas, I know this post hoc power business can be very tricky. Try to ask yourself which question post hoc power actually answers. Try to formulate it in a very specific sentence. Generally, [given a statistical model] power is the chance of showing BE in a trial if your expectations about the GMR and CV are correct when you use N subjects. The choice of GMR, CV, and N to plug into a power calculation is up to the user, but in the specific case of post hoc power the stats software may make such a choice for you without asking. Often it is 95% or 100%, regardless of what you observed in the previous trial. So with this in mind, look back at your figures and tell us which question the post hoc calculation in your case gave an answer to. Feel free to paste your numbers here; then I am sure someone can help work it out. You might be surprised. — I could be wrong, but… Best regards, ElMaestro  Bootstrapping for dissolution data is a relatively new hobby of mine. 
Helmut Hero Vienna, Austria, 20171226 15:51 @ kms.srinivas Posting: # 18088 Views: 3,679 

Hi kms, » I mean to say it always comes out in the interval 95 to 99 and sometimes 100; sometimes we get less power, in the 80s, but very rarely. It might be that in the actual study the CV is lower and/or the GMR closer to 1 than assumed. Power is especially sensitive to the GMR. We can also simulate studies and assess the outcome. The GMR follows the lognormal distribution, and in a 2×2×2 crossover the variance follows the χ² distribution with n–2 degrees of freedom. Try this R code (the library PowerTOST will be installed if not available):
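[Note: Helmut's original R code did not survive the page extraction. As a rough illustration of the simulation he describes – log-GMRs drawn from a normal distribution, within-subject variances from a χ² distribution with n−2 degrees of freedom – here is a Python sketch. `tost_power` is a noncentral-t approximation of PowerTOST's exact `power.TOST()`; all function names and the example parameters (CV 0.25, GMR 0.95, n 38) are my assumptions, not the original code.]

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def tost_power(cv, gmr, n, alpha=0.05):
    # Approximate power of the TOST procedure for a balanced 2x2x2 crossover,
    # log scale, BE limits 0.80-1.25. Noncentral-t approximation; PowerTOST's
    # power.TOST() is exact (Owen's Q), so values may differ slightly.
    df = n - 2
    sem = np.sqrt(np.log(cv ** 2 + 1) * 2 / n)   # SE of log(T/R)
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1 = (np.log(gmr) - np.log(0.80)) / sem
    ncp2 = (np.log(gmr) - np.log(1.25)) / sem
    pw = nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1)
    return float(min(max(pw, 0.0), 1.0))

def simulate_posthoc(cv, gmr, n, nsims=2000, seed=1):
    # Draw 'observed' log-GMRs (normal) and within-subject variances
    # (chi-squared, n-2 df), then compute each study's post hoc power.
    rng = np.random.default_rng(seed)
    df = n - 2
    sw2 = np.log(cv ** 2 + 1)                    # assumed variance (log scale)
    sem = np.sqrt(sw2 * 2 / n)
    lgmr = rng.normal(np.log(gmr), sem, nsims)   # simulated log-GMRs
    s2 = sw2 * rng.chisquare(df, nsims) / df     # simulated variances
    cvs = np.sqrt(np.exp(s2) - 1)                # back to CVs
    return np.array([tost_power(c, np.exp(g), n) for c, g in zip(cvs, lgmr)])

# Planned with CV 0.25, GMR 0.95 for ~90% power -> n = 38
pw = simulate_posthoc(0.25, 0.95, 38)
frac_above = float(np.mean(pw >= 0.90))  # share of studies at/above the target
```

Roughly half of the simulated studies come out above the planned power – the "~50%" rule of thumb mentioned elsewhere in the thread.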
If you are interested in distributions of the GMR, CVs, and powers in the simulated studies, continue:
Explore also box plots and histograms:
Even if the study was planned for 90% power (like in the example) I strongly doubt that you will always get a post hoc power of 95–99%:
If the study was planned for 80% power it will be even less likely:
As ElMaestro suggested: can you give us one example? We need the CV, the GMR, the number of eligible subjects, and your “power”. If the study was imbalanced, please give the number of subjects in each sequence. — Regards, Helmut Schütz 
BEproff Senior Russia, 20171227 06:53 @ Helmut Posting: # 18091 Views: 3,528 

Hi Helmut, is it worth calculating the sample size at a power of 95%? Let's say I am a rich client with a big wallet. Are there any risks? 
ElMaestro Hero Denmark, 20171227 07:25 @ BEproff Posting: # 18093 Views: 3,507 

Hi BEproff, » Is it worth to calculate sample size at power of 95%? Calculation is relatively cheap. Execution is relatively expensive. — Best regards, ElMaestro 
Yura Regular Belarus, 20171227 08:23 @ BEproff Posting: # 18094 Views: 3,494 

Hi BEproff » Is it worth to calculate sample size at power of 95%? forced bioequivalence 
kms.srinivas Junior India, 20171227 09:11 @ Yura Posting: # 18095 Views: 3,466 

» forced bioequivalence Hi Yura, what about regulatory queries on “unintentional forced bioequivalence”? 
Yura Regular Belarus, 20171227 10:07 @ kms.srinivas Posting: # 18096 Views: 3,502 

Hi kms.srinivas, if for GMR=0.95 and CV=0.25 the calculation calls for N = 28–30 subjects and you carry out the study with 50 subjects, there can be questions about forced bioequivalence. Regards 
Helmut Hero Vienna, Austria, 20171227 12:57 @ Yura Posting: # 18099 Views: 3,438 

Hi Yura, » If at GMR=0.95 and CV=0.25 calculation has to be carried out at N = 28–30 subjects, and you carry out on 50 subjects, there can be questions on forced bioequivalence If I got your example correctly, 28–30 subjects mean 81–83% power. In some guidelines 80–90% power is recommended. 90% power would require 38 subjects. 50 subjects would mean 96% power (BEproff’s “big wallet”). However, such a sample size should concern only the IEC (assessing the risk for the study participants). If the protocol was approved by the IEC and the competent regulatory agency, performed as planned, and by chance shows an even higher “power”, an assessor’s eyebrow might be raised, but technically (the Type I Error is controlled) there is no reason to question the study. — Regards, Helmut Schütz 
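[Note: these figures can be verified with PowerTOST's `power.TOST()`. The sketch below is a Python rewrite using the common noncentral-t approximation – my own helper, not PowerTOST – so the values may differ from the exact Owen's-Q result in the third decimal.]

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def tost_power(cv, gmr, n, alpha=0.05):
    # Approximate TOST power, balanced 2x2x2 crossover, BE limits 0.80-1.25
    # (noncentral-t approximation of PowerTOST's exact power.TOST()).
    df = n - 2
    sem = np.sqrt(np.log(cv ** 2 + 1) * 2 / n)   # SE of log(T/R)
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1 = (np.log(gmr) - np.log(0.80)) / sem
    ncp2 = (np.log(gmr) - np.log(1.25)) / sem
    pw = nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1)
    return float(min(max(pw, 0.0), 1.0))

# GMR 0.95, CV 0.25: power for the sample sizes discussed above
# n = 28 -> ~81%, n = 38 -> ~91%, n = 50 -> ~96%
for n in (28, 38, 50):
    print(n, round(tost_power(0.25, 0.95, n), 3))
```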
Yura Regular Belarus, 20171227 13:33 @ Helmut Posting: # 18101 Views: 3,401 

Hi Helmut, » If the protocol was approved by the IEC and the competent regulatory agency Yes, explanations are needed for estimating the sample size and the design (that is, for reducing it). 
Helmut Hero Vienna, Austria, 20171227 14:31 @ Yura Posting: # 18102 Views: 3,375 

Hi Yura, » » If the protocol was approved by the IEC and the competent regulatory agency » » yes, explanations are needed for estimating the sample size and design (that to reduce it) Sure. But once the explanations (better: assumptions) were accepted (i.e., the protocol approved), that’s the end of the story. That’s why I’m extremely wary of using the term “forced bioequivalence” in a regulatory context (see also this post and the following ones). IMHO, post hoc power is crap. The chance to get one which is higher than planned is ~50%. Run my simulation code and at the end:
With your example (CV 0.25, GMR 0.95, target power 80%, n 28, 10^{5} simulations):
I suggest reserving this term for the context of study planning.
— Regards, Helmut Schütz 
Yura Regular Belarus, 20171228 06:50 @ Helmut Posting: # 18108 Views: 3,186 

Hi Helmut, » Although the study was performed with 28 subjects for ~81% power, the chance to get a post hoc power of ≥ 90% is ~23% and ≥ 95% is ~10%. That’s clearly not “forced BE” and none of these studies should be questioned by regulators. You are considering the power after a study with n = 28 (which was calculated before the study: GMR=0.95, CV=0.25, target power 80%). The question is whether it is possible to carry out a study with n = 50, and would this be forced bioequivalence? Regards 
Helmut Hero Vienna, Austria, 20171228 11:30 @ Yura Posting: # 18111 Views: 3,178 

Hi Yura, » You are considering the power after the study at n = 28 (which were calculated before the study: GMR=0.95, CV=0.25, target power 80%). The question is whether it is possible to carry out a study at n = 50 and will this be forced bioequivalence? As I wrote above, the IEC and the authority should judge this before the study is done. I agree that in many cases the statistical knowledge of IECs is limited. However, once the protocol was approved by both, I don’t see a reason to talk about “forced BE” any more. BTW, I don’t see a problem if a study is designed for 90% power (80% is not carved in stone). Let’s assume a dropout rate of 15% and we already end up with 46 subjects:
Considering your example and assuming that the GMR and CV turn out exactly as assumed, no dropouts (n=50): the 90% CI will be 87.47–103.18%. Fine with me. Not even a significant difference (100% is included). If the dropout rate is as expected (n=38), the 90% CI will be 86.36–104.51%. If the assessor is not happy with that, he should have a chat with the colleague who approved the protocol and enlighten him about potential “overpowering” in study planning. According to all guidelines (CI within the acceptance range) I can’t imagine a justification to reject the study. If the study is not accepted only due to the high sample size, in the EEA the applicant might go for a referral (with extremely high chances of success) and in the USA the FDA will be sued right away. — Regards, Helmut Schütz 
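[Note: the two confidence intervals above can be reproduced from the standard formula for a balanced 2×2×2 crossover, CI = exp(log(GMR) ± t(0.95, n−2)·SE). A Python sketch; the helper name is mine.]

```python
import math
from scipy.stats import t as t_dist

def ci_90(cv, gmr, n):
    # 90% CI of the GMR in a balanced 2x2x2 crossover: the within-subject SD
    # on the log scale is sqrt(log(CV^2 + 1)), and SE = sw * sqrt(2/n).
    df = n - 2
    sem = math.sqrt(math.log(cv ** 2 + 1) * 2 / n)
    half = t_dist.ppf(0.95, df) * sem
    return math.exp(math.log(gmr) - half), math.exp(math.log(gmr) + half)

print(ci_90(0.25, 0.95, 50))   # ~ (0.8747, 1.0318), i.e. 87.47-103.18%
print(ci_90(0.25, 0.95, 38))   # ~ (0.8636, 1.0451), i.e. 86.36-104.51%
```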
Helmut Hero Vienna, Austria, 20171227 12:23 @ kms.srinivas Posting: # 18097 Views: 3,431 

» what about regulatory queries on "unintentional forced Bioequivalence" What do you mean by unintentional? Again: stop estimating post hoc power! Either the study demonstrated BE or it did not.* Going back to my example (study planned for 90% power): the chance to obtain a post hoc power of ≥95% is ~35%. Now what? It only means that – the CV was lower and/or – the GMR closer to 1 and/or – you had fewer dropouts than assumed.
I still think that your calculations are wrong, and that is why you are seeing high values so often. Would you mind giving us the data ElMaestro and I asked for?
— Regards, Helmut Schütz 
kms.srinivas Junior India, 20171227 12:41 (edited by kms.srinivas on 20171227 13:12) @ Helmut Posting: # 18098 Views: 3,456 

Dear Helmut, thank you very much for your reply. » study planned for 90% power: The chance to obtain a post hoc power of ≥95% is ~35%. Is it a rule of thumb? How should it be calculated? » It only means that » – the CV was lower and/or » – the GMR closer to 1 and/or » – you had less dropouts than assumed. Yes, in the post hoc analysis I found the three aspects you mentioned. After the experiment it turned out that 1. the CV was lower, 2. the GMR was close to 1, 3. there were no dropouts/withdrawals (though 10% dropouts were assumed beforehand). 
Helmut Hero Vienna, Austria, 20171227 13:02 @ kms.srinivas Posting: # 18100 Views: 3,379 

Hi kms, » » study planned for 90% power: The chance to obtain a post hoc power of ≥95% is ~35%. » » Is it a thumb rule? No; obtained in simulations by the R code I posted above. There is only one – rather trivial – rule of thumb: the chance to get a post hoc power which is either lower or higher than the target is ~50%. » how it should be calculated? Once you have performed the simulations, use
Adapt the relevant data according to your needs. For CV 0.30, T/R 0.90, and target power 80% you would get only 7.77% in the range 0.95–0.99 and 8.64% ≥0.95. » Yes, on getting posthoc analysis, i found three aspects what you said: » After experiment, it came to know that » 1. CV getting lower » 2. GMR close to 1 » 3. No dropouts/withdrawals (though prior consideration of 10% dropouts) Fine. Can you explain to us why you performed a “post hoc analysis” at all? What did you want to achieve? To repeat ElMaestro: » » » Try and ask yourself which question posthoc power actually answers. Try and formulate it in a very specific sentence. For the 5th time (already asked #1, #2, #3, #4): an example would help. We tried to answer your questions. It would be nice if you answered ours as well. — Regards, Helmut Schütz 
kms.srinivas Junior India, 20171228 05:53 @ Helmut Posting: # 18107 Views: 3,208 

Dear Helmut, » For the 5th time (already asked #1, #2, #3, #4): An example would help. » We tried to answer your questions. It would be nice if you answer ours as well. These are the results I'm finding most of the time. Please find the example: Before the study: desired power: 80%; alpha: 5%; ISCV: 20%; GMR: 110%; no. of subjects: 36 (32 + 4 by considering 10% dropouts); no. of subjects who completed the study: 34. After the study: achieved power: 99%; alpha: 5%; ISCV: 18.36%; GMR: 96%. The BE limits are the usual 80 to 125. Kindly clarify. 
Helmut Hero Vienna, Austria, 20171228 11:47 @ kms.srinivas Posting: # 18112 Views: 3,204 

Hi kms, » These are the results i'm finding maximum times. THX for providing the data. I guess you mean maximum concentrations? When doubting your calculations I stand corrected! library(PowerTOST) gives: [output of the equivalence test (TOST) not preserved]. Modifying my simulation code for theta0=1/1.1 and n=34 I got 1e+05 simulated studies with “post hoc” power [distribution not preserved]. Hence, such a high power is unlikely but possible. As usual, power is more sensitive to changes in the GMR than in the CV: pwr.0 <- power.TOST(CV=0.2, theta0=1/1.1, n=34) One question remains open: » Can you explain to us why you performed a “posthoc analysis” at all? What did you want to achieve? To repeat ElMaestro: » » » » Try and ask yourself which question posthoc power actually answers. Try and formulate it in a very specific sentence. — Regards, Helmut Schütz 
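[Note: kms's numbers can be checked directly. With the observed CV 18.36%, GMR 0.96, and 34 subjects the post hoc “power” is indeed ≈99%, while the planned scenario (CV 20%, GMR 1/1.1, n 34) gives ≈83%. A Python sketch using a noncentral-t approximation of `power.TOST()`; the helper name is mine.]

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def tost_power(cv, gmr, n, alpha=0.05):
    # Approximate TOST power, balanced 2x2x2 crossover (noncentral-t
    # approximation of PowerTOST's exact power.TOST()).
    df = n - 2
    sem = np.sqrt(np.log(cv ** 2 + 1) * 2 / n)
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1 = (np.log(gmr) - np.log(0.80)) / sem
    ncp2 = (np.log(gmr) - np.log(1.25)) / sem
    pw = nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1)
    return float(min(max(pw, 0.0), 1.0))

planned = tost_power(0.20, 1 / 1.1, 34)   # assumptions of the protocol
posthoc = tost_power(0.1836, 0.96, 34)    # observed CV and GMR
print(round(planned, 2), round(posthoc, 2))   # ~0.83 and ~0.99
```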
DavidManteigas Regular Portugal, 20171228 16:59 @ Helmut Posting: # 18114 Views: 3,066 

Hi all, I think the answer to the first question of this post is "because you were very pessimistic in your assumptions regarding sample size", which is something very common in BA/BE trials (at least, this is my perception). In this case, given your simulations above and the expected probability of approximately 12% of studies having power greater than 95% under the initial assumptions, post hoc power means nothing. But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming an expected power of 80%, some questions and conclusions might be drawn from those results, don't you think? From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05. Regards, David 
Helmut Hero Vienna, Austria, 20171228 17:33 @ DavidManteigas Posting: # 18115 Views: 3,093 

Hi David, » I think the answer to the first question of this post is "because you were very pessimistic on your assumptions regarding sample size" which is something very common in BABE trials (at least, this is my perception). Haha! I get too many failed studies on my desk and my clients think that I’m Jesus and can reanimate a corpse… In most cases they were overly optimistic in designing their studies. » In this case, given your simulations above and the expected probability of approximately 12% of the studies having power greater than 95% … ~15%! » … having in consideration the initial assumptions, post hoc power means nothing. Yep. » But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming expected power of 80%, some questions and conclusions might be drawn from those results, don't you think? Agree. » From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05. Well, the current GL is poorly written. It talks only about an “appropriate sample size calculation” [sic]. The 2001 NfG was clearer: the number of subjects required is determined by the error variance of the primary characteristic, the desired significance level, the expected deviation from the reference product, and the required power.
I’m not a pessimist, I’m just a well informed optimist. José Saramago To call the statistician after the experiment is done may be no more than asking him to perform a post mortem examination: he may be able to say what the experiment died of. R.A. Fisher OK, I make money acting as a coroner. Wasn’t really successful in the reanimation attempts. — Regards, Helmut Schütz 
d_labes Hero Berlin, Germany, 20171228 18:57 @ Helmut Posting: # 18117 Views: 3,017 

Dear Helmut, » ...Taking into account that the analytical method used for measuring the content of test and reference batches has limited accuracy/precision (2.5% is excellent!) and the method is not validated for the reference (you can ask the innovator for a CoA but never ever will get it) 0.95 might be “normal” ... "Normal" is a question one could debate for hours, probably ending in a flame war. For me 0.95 or 1/0.95 is as "normal" as setting alpha = 0.05. It's a convention to be used if nothing specific about the GMR is known. Nothing more. And it has seemed mostly to work over the years I have observed the use of this setting. Of course it is not a natural constant. Clinically relevant difference (aka GMR in bioequivalence studies): that which is used to justify the sample size but will be claimed to have been used to find it. Stephen Senn » ... but IMHO, optimistic even if you measure a content of 100% for both T and R. Given that power is most sensitive to the GMR I question the usefulness of 0.95. Any other suggestion instead of 0.95? — Regards, Detlew 
mittyri Senior Russia, 20171228 22:06 @ d_labes Posting: # 18120 Views: 2,998 

Dear Detlew, dear Helmut, let me play the devil’s advocate. Let's say you have to register some new generic pharmaceutical product and you have the data of a pilot study which says that the GMR is about 115% with a CV of about 20% (n=20). The ISCV is broadly in line with literature sources (18–26%). Again, a man in an Armani suit (your boss) says it MUST be registered and you have only the current product to test. So you are interested in registering the drug (showing bioequivalence). What would be your recommendation for the sample size? How would you estimate it? — Kind regards, Mittyri 
Helmut Hero Vienna, Austria, 20171228 22:33 @ mittyri Posting: # 18122 Views: 2,995 

Hi mittyri, » Let me play the devil’s advocate. Love it. » Again, a man in an Armani suit as your boss says it MUST be registered and you have only the current product to test. So you are interested to register the drug (to show bioequivalence). » What would be your recommendation for sample size? How would you estimate it? “Must be registered” is a bit thick! Try…
Seriously: First I would check a CI of the CV observed in the pilot. I would use an α of 0.2.
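[Note: an upper confidence limit of the CV – what `CVCL()` in PowerTOST computes – follows from the χ² distribution of the estimated log-scale variance. A Python sketch, assuming the pilot was a 2×2×2 crossover with n = 20 (df = 18) and using the one-sided α = 0.2 Helmut mentions; the helper name is mine.]

```python
import math
from scipy.stats import chi2

def cv_upper_cl(cv, df, alpha=0.2):
    # One-sided upper (1-alpha) confidence limit of the CV: the log-scale
    # variance estimate s2 satisfies df*s2/sigma2 ~ chi-squared(df).
    s2 = math.log(cv ** 2 + 1)                  # observed within-subject variance
    s2_up = df * s2 / chi2.ppf(alpha, df)       # upper limit of the variance
    return math.sqrt(math.exp(s2_up) - 1)

ub = cv_upper_cl(0.20, df=18)   # pilot with CV 0.20 and n = 20
print(round(ub, 3))             # ~0.24: plan the pivotal study with this CV
```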
— Regards, Helmut Schütz 
Helmut Hero Vienna, Austria, 20171228 22:10 @ d_labes Posting: # 18121 Views: 3,030 

Dear Detlew, » "Normal" is a question one could debate for hours, probably ending in a flame war. » For me 0.95 or 1/0.95 is as "normal" like setting alpha = 0.05. Not for me. The former is an assumption whereas the latter is fixed by the authority. » It's a convention to be used if nothing specific about the GMR is known. Nothing more. Agree. » And it seemed mostly to work over the years I have observed the use of this setting. » Of course it is not a natural constant. Agree again. » » ... but IMHO, optimistic even if you measure a content of 100% for both T and R. Given that power is most sensitive to the GMR I question the usefulness of 0.95. » » Any other suggestion instead of 0.95 Communication with the analytical staff & common sense. The GL tells us that the T and R batches should not differ by more than 5% in their contents. I’m too lazy to browse through my protocols but IIRC, on average it was 2–3%. Now add the analytical (in)accuracy and – conservatively assuming that the errors may be on opposite sides – you easily end up with a GMR worse than 0.95. Measuring content is not always trivial: some MR products are difficult and most topical products a nightmare (extracting a lipophilic drug from a cream full of emulsifiers…). If I have to deal with a simple IR product, the analytical method is very good, and the difference small, I’m fine with 0.95 as well. In general I prefer conservative assumptions over optimistic ones. With the former (if they were false) you may have burned money but have a study which passed. With the latter sometimes you have to perform yet another study. Not economic in the long run. — Regards, Helmut Schütz 
d_labes Hero Berlin, Germany, 20171228 17:41 @ DavidManteigas Posting: # 18116 Views: 3,105 

Dear David, » In this case, given your simulations above and the expected probability of approximately 12% of the studies having power greater than 95% having in consideration the initial assumptions, post hoc power means nothing. Full ACK. But not only in this case; in other cases too. The concept of post hoc power is flawed in itself. » But if you had 100 studies instead and 90% of them had >95% power although the sample size was calculated assuming expected power of 80%, some questions and conclusions might be drawn from those results, don't you think? From my understanding of the initial question, this was the case found. So I think that they should start by reviewing how they define their assumptions for the sample size, namely why they assume GMR=1.10 instead of the "normal" 0.95/1.05. Again: Full ACK. — Regards, Detlew 
kms.srinivas Junior India, 20171229 13:20 (edited by kms.srinivas on 20171229 13:55) @ DavidManteigas Posting: # 18123 Views: 2,913 

» In this case, why they assume GMR=1.10 instead of the "normal" 0.95/1.05. Yes. This was an example I was trying to give to Helmut & ElMaestro. It is a very rare case; in general the GMR would be taken as 0.95/1.05, as you said, but the power remains in the same 95% to 99% range. 
Helmut Hero Vienna, Austria, 20171229 16:18 @ kms.srinivas Posting: # 18127 Views: 2,833 

Hi kms, » » In this case, why they assume GMR=1.10 instead of the "normal" 0.95/1.05. » » This was […] a very rare case, but in General GMR would be taken as 0.95/1.05 as you said, but the power remains same as 95% to 99% Similar ≠ same. CV 0.20, GMR 1.10, target 80%, n 32 (power 81.01%), no dropouts expected
CV 0.20, GMR 1.05, target 80%, n 18 (power 80.02%), no dropouts expected
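[Note: both sample sizes can be reproduced with a simple upward search over even n, which is essentially what PowerTOST's `sampleN.TOST()` does. The sketch below uses a noncentral-t approximation of `power.TOST()`, so at the boundary it may land one step away from the exact answer; all names are mine.]

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def tost_power(cv, gmr, n, alpha=0.05):
    # Approximate TOST power, balanced 2x2x2 crossover (noncentral-t
    # approximation of PowerTOST's exact power.TOST()).
    df = n - 2
    sem = np.sqrt(np.log(cv ** 2 + 1) * 2 / n)
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1 = (np.log(gmr) - np.log(0.80)) / sem
    ncp2 = (np.log(gmr) - np.log(1.25)) / sem
    pw = nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1)
    return float(min(max(pw, 0.0), 1.0))

def find_n(cv, gmr, target=0.80, alpha=0.05, n_max=200):
    # Smallest even (balanced) n whose approximate power reaches the target.
    for n in range(4, n_max + 1, 2):
        if tost_power(cv, gmr, n, alpha) >= target:
            return n
    return None

print(find_n(0.20, 1.10))   # 32 per PowerTOST (power 81.01%)
print(find_n(0.20, 1.05))   # 18 per PowerTOST (80.02%); the approximation may give 20
```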
I still don’t understand what you wrote above: » […] always it is coming in the interval 95 to 99 and some times 100 BTW, power curves are not symmetric in raw scale but in log scale (see this post). If you are not – very – confident about the sign of ∆, I recommend using GMR=1–∆ in order to be on the safe side. By this, power is preserved for GMR=1/(1–∆) as well. If you use GMR=1+∆, power for GMR=1–∆ will be insufficient:
In simple words, for ∆ 10% assume a GMR of 0.90 (which will preserve power for GMRs up to 1.1111) and for ∆ 5% a GMR of 0.95 (covers up to 1.0526). If you assume a GMR of 1.10, power will be preserved only down to 0.9091, and with 1.05 only down to 0.9524 – not to 0.90 or 0.95 as many probably expect. — Regards, Helmut Schütz 
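[Note: the log-scale symmetry is easy to check numerically: power for GMR and 1/GMR is identical, while 1−∆ and 1+∆ are not. A Python sketch with a noncentral-t approximation of `power.TOST()`; the helper name and the example n = 32 are mine.]

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def tost_power(cv, gmr, n, alpha=0.05):
    # Approximate TOST power, balanced 2x2x2 crossover (noncentral-t
    # approximation of PowerTOST's exact power.TOST()).
    df = n - 2
    sem = np.sqrt(np.log(cv ** 2 + 1) * 2 / n)
    tcrit = t_dist.ppf(1 - alpha, df)
    ncp1 = (np.log(gmr) - np.log(0.80)) / sem
    ncp2 = (np.log(gmr) - np.log(1.25)) / sem
    pw = nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1)
    return float(min(max(pw, 0.0), 1.0))

p_090 = tost_power(0.20, 0.90, 32)         # Delta = 10%, 'down' side
p_110 = tost_power(0.20, 1.10, 32)         # Delta = 10%, 'up' side
p_mirror = tost_power(0.20, 1 / 0.90, 32)  # 1.1111..., log-scale mirror of 0.90

# 0.90 and 1/0.90 give identical power; 1.10 (smaller |log GMR|) gives more
print(round(p_090, 3), round(p_mirror, 3), round(p_110, 3))
```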
Yura Regular Belarus, 20171229 13:46 @ Helmut Posting: # 18125 Views: 2,883 

Hi Helmut, cool how you use the capabilities of R 
Astea Regular Russia, 20180120 23:55 @ Helmut Posting: # 18234 Views: 1,943 

Dear all! I was interested in how this works on real data. For this purpose I calculated the power for ~50 real successful 2×2 studies. Of course the sample size is too low to draw any conclusions, but the tendency is quite similar. The results are below; please correct me if my reasoning is wrong. The GMR of Cmax follows the lognormal distribution (p=0.123, Shapiro–Wilk W test), the geometric mean PE was 0.9824, and adding ± SD of the log-transformed data leads to 0.935–1.032. So assuming 94–95% in the sample size calculation seems to reflect the expected ratio well. The mean CV was 20.5% (median 18.5%). The average a posteriori power was 86.21% (median 97.33%). The distribution was as follows:
≥ target : 73.58% That is, in 26.42% of successful studies the post hoc power was less than 80%. Why not in 50%? I suppose it is connected with two facts: the real number of subjects is always greater than calculated because researchers allow for dropouts, and there is a restricted minimum number of subjects in a study. I reproduced the calculation performed by Helmut for CV=18.5%, GMR=98.24%, and 10% and 20% dropout rates. For 10% I got
≥ target : 66.21% For 20%:
≥ target : 79.65% The blue histogram is for the real data (nbins=15). The higher rate of powers close to 1 in the real data is probably connected with the redundant number of subjects in studies with low CV (for a CV lower than 22% and GMR=0.95 the power would be greater than 80% whenever more than 24 subjects are involved). P.S.
if (length(package[!inst]) > 0) install.packages(package[!inst]) One needs an "s" at the end of "package(s)"? cat("Results of", nsims, "simulated studies:\n")); summary(res) An extra ")"? Anticipating the question "but why?": First, the possibilities of R and of Detlew's and Helmut's code are really impressive! And second: to make sure once again that a posteriori power is a needless thing and it is a waste of paper to include it in the report. Why are you teaching your sister bad words? I want her to know them and never repeat them. 
mittyri Senior Russia, 20171228 21:52 @ Helmut Posting: # 18119 Views: 3,034 

Hi Helmut, » Fine. Can you explain to us why you performed a “posthoc analysis” at all? What did you want to achieve? To repeat ElMaestro: » » » » Try and ask yourself which question posthoc power actually answers. Try and formulate it in a very specific sentence. Hmmm… maybe the topic starter is preparing reports for the Eurasian Economic Union? "– study power analysis (presenting the results for both Cmax and AUC(0–t) in tabular format); – summary and conclusion" It took effect this year. Colleagues will have a lot of fun, IMHO. PS: the Eurasian Economic Commission – Setting up of Common Market of Pharmaceutical Products link is dead on the Guideline page. I didn't find an alternative. Maybe Yura or Beholder or BEproff or someone else can help. — Kind regards, Mittyri 
Yura Regular Belarus, 20171229 13:41 @ mittyri Posting: # 18124 Views: 2,882 

Hi mittyri, so I understand: present the power analysis and draw the conclusion (at low power) that, since bioequivalence was established, the a posteriori power does not affect the findings of the study. A more interesting question: the pharmacokinetic equation and its analysis… Kind regards 
mittyri Senior Russia, 20171229 14:11 @ Yura Posting: # 18126 Views: 2,848 

Hi Yura, » So I understand: imagine the power analysis and draw a conclusion (at low power) that since bioequivalence is established, a posteriori power rating does not affect the findings of the study Yep. Once again: to be lucky is not a crime, and vice versa: if the post hoc power is close to 100%, you're a hero even if the study was overpowered de facto (i.e., CI 0.98–1.03). » More interesting question: the pharmacokinetic equation and its analysis ... Good question! Next question! It seems they are interested in a PK model, which is not permitted. No clue so far. — Kind regards, Mittyri 
Beholder Regular Russia, 20180116 15:10 @ mittyri Posting: # 18184 Views: 2,207 

Hi Mittyri, » PS: Eurasian Economic Commission – Setting up of Common Market of Pharmaceutical Products link is dead on Guideline page. Didn't find the alternative. » ...May be Yura or Beholder or BEproff or someone else can help. Let me try. If you meant this link, then it is still alive and allows opening the Decisions. One can also use ConsultantPlus to find the EEU Decisions, but only after 20.00 (Moscow time). Nevertheless, our neighbours from Belarus have all the Decisions uploaded to the Expertise Center site. — Best regards Beholder 
xtianbadillo Junior Mexico, 20180118 22:22 @ BEproff Posting: # 18219 Views: 2,072 

» Is it worth to calculate sample size at power of 95%? » Are there any risks? Too many subjects:
– It is unethical to disturb more subjects than necessary.
– Some subjects are put at risk although they are not needed.
– It is an unnecessary waste of resources ($).
– The bioanalytical sample analysis takes more time; time is money (← this does not apply to a rich client).
Too few subjects:
– A study unable to reach its objective is unethical.
– All subjects are at risk for nothing.
– All resources ($) are wasted when the study is inconclusive. 
ElMaestro Hero Denmark, 20171228 20:13 @ kms.srinivas Posting: # 18118 Views: 3,012 

Dear all, today I flipped a coin two times and recorded the number of times I got tails. I defined success as any outcome associated with two tails. I got two tails and the experiment was a success. What was the power, given the outcome? At this point it is clearly experimentally proven that the coin only shows tails, so the power was obviously 100%. Some of you may insist that I was just "lucky", whatever the hell that means. To this backward perception I must clearly refer to the truth illustrated by the numbers: this was not luck, it was meant to happen by way of the nature of the coin. Numbers don't lie. Humans do. Don't believe all the rubbish from the usual suspects in this thread, like Chewbacca and The Berlin Chimney. They are clearly not so well connected with real life as I am. It is incredible what some people can get away with nowadays. — I could be wrong, but… Best regards, ElMaestro 