Bioequivalence and Bioavailability Forum



forced BE? [Design Issues]

posted by DavidManteigas - Portugal, 2017-08-07 12:02  - Posting: # 17670

Hi Helmut,

» The method by which the sample size is calculated should be given in the protocol, together with the estimates of any quantities used in the calculations (such as variances, mean values, response rates, event rates, difference to be detected). The basis of these estimates should also be given. It is important to investigate the sensitivity of the sample size estimate to a variety of deviations from these assumptions and this may be facilitated by providing a range of sample sizes appropriate for a reasonable range of deviations from assumptions. In confirmatory trials, assumptions should normally be based on published data or on the results of earlier trials. […] Conventionally the probability of type I error is set at 5% or less or as dictated by any adjustments made necessary for multiplicity considerations; the precise choice may be influenced by the prior plausibility of the hypothesis under test and the desired impact of the results. The probability of type II error is conventionally set at 10% to 20%; it is in the sponsor’s interest to keep this figure as low as feasible especially in the case of trials that are difficult or impossible to repeat.
» Did you ever see that in a protocol? I didn’t.

I never saw it either, although a lot of sensitivity analyses are done at the planning stage (at least in the studies I work on).

» Power is much more sensitive to the GMR than to the CV. Maybe looking at it the other way ’round would be better. If you are not a hard-core frequentist, going Bayesian might be an option.

I'm not a hard-core frequentist, but going Bayesian is still a crime to some statisticians, and I would only go that way if no other alternatives were available :-D
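The quoted claim — that power is much more sensitive to the GMR than to the CV — can be checked numerically. Below is a minimal Python sketch (the forum's usual tool would be PowerTOST in R) of TOST power for a 2×2 crossover on the log scale, using the common noncentral-t approximation rather than exact Owen's Q, so results differ slightly from PowerTOST; the function and parameter names are mine.

```python
from math import log, sqrt

from scipy import stats


def power_tost(cv, gmr, n, alpha=0.05, theta1=0.80, theta2=1.25):
    """Approximate TOST power for a 2x2 crossover on the log scale.

    n is the total number of subjects, cv the intra-subject CV.
    Noncentral-t approximation; exact methods (Owen's Q, as in
    PowerTOST) give slightly different values.
    """
    sigma_w = sqrt(log(cv ** 2 + 1))   # intra-subject SD, log scale
    se = sigma_w * sqrt(2 / n)         # SE of the treatment difference
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    ncp1 = (log(gmr) - log(theta1)) / se
    ncp2 = (log(gmr) - log(theta2)) / se
    power = (1 - stats.nct.cdf(t_crit, df, ncp1)) \
        + stats.nct.cdf(-t_crit, df, ncp2) - 1
    return max(power, 0.0)


# Same absolute shift (0.05) applied once to the GMR, once to the CV:
base = power_tost(cv=0.30, gmr=0.95, n=40)
worse_gmr = power_tost(cv=0.30, gmr=0.90, n=40)
worse_cv = power_tost(cv=0.35, gmr=0.95, n=40)
print(f"base {base:.3f}, GMR 0.90 {worse_gmr:.3f}, CV 0.35 {worse_cv:.3f}")
```

Shifting the assumed GMR from 0.95 to 0.90 costs considerably more power than shifting the CV from 0.30 to 0.35, which is exactly the asymmetry described in the quote.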

» Again: Post hoc power. How relevant is it?
» IMHO, ECs should simply follow ICH E9. Which are the assumptions, how sensitive is the sample size to deviations, …? It is the job of the EC to protect the health of subjects in the study. Hence, it is desirable to keep the sample size as small as possible but as large as necessary.

It might be relevant if you note a trend in approved studies: for instance, if most of them are significantly overpowered (with "significantly" still to be defined) and the assumptions used in the sample-size calculation are far away from the observed parameters, I think this is an issue that should be handled and is currently being neglected. By the regulators, which (again, IMO) should be more open to sharing their information on the CV and GMR of already approved drugs; and by the sponsors, which should present strong justifications in their protocols (as per ICH E9) — such as sensitivity analyses and references that actually support their assumptions, other than "available literature" and "in-house data" — and should look more carefully at the other design strategies available instead of just plugging in a CV based on intuition. Otherwise, we are assuming that sample-size estimation is magic performed by the statistician to achieve the maximum budget for the project :-D
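ICH E9's suggestion to provide "a range of sample sizes appropriate for a reasonable range of deviations from assumptions" can be tabulated directly. The sketch below searches for the smallest balanced total n giving ≥80% power over a small grid of assumed CV/GMR values, again using the noncentral-t approximation of TOST power (exact tools such as sampleN.TOST in R's PowerTOST would be the usual choice; all names here are mine, and the grid values are illustrative).

```python
from math import log, sqrt

from scipy import stats


def power_tost(cv, gmr, n, alpha=0.05, theta1=0.80, theta2=1.25):
    """Approximate TOST power (noncentral-t) for a 2x2 crossover."""
    sigma_w = sqrt(log(cv ** 2 + 1))
    se = sigma_w * sqrt(2 / n)
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    ncp1 = (log(gmr) - log(theta1)) / se
    ncp2 = (log(gmr) - log(theta2)) / se
    return max((1 - stats.nct.cdf(t_crit, df, ncp1))
               + stats.nct.cdf(-t_crit, df, ncp2) - 1, 0.0)


def sample_size(cv, gmr, target=0.80, n_max=300):
    """Smallest balanced total n with power >= target (None if > n_max)."""
    for n in range(12, n_max + 1, 2):  # start at 12, the common BE minimum
        if power_tost(cv, gmr, n) >= target:
            return n
    return None


# Range of sample sizes over a reasonable range of assumptions:
sizes = {(cv, gmr): sample_size(cv, gmr)
         for cv in (0.25, 0.30, 0.35)
         for gmr in (0.95, 0.90)}
for (cv, gmr), n in sorted(sizes.items()):
    print(f"CV {cv:.2f}, GMR {gmr:.2f}: n = {n}")
```

A table like this in the protocol shows at a glance how hard the sample size leans on each assumption — and, as above, the GMR assumption dominates.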

