Helmut (Hero) · Vienna, Austria · 2017-07-11 21:34 · Posting: # 17535

Dear all,

I’m asking myself how to interpret this part of the BE-GL in the section Subject accountability:

Ideally, all treated subjects should be included in the statistical analysis. However, subjects in a crossover trial who do not provide evaluable data for both of the test and reference products […] should not be included.

Funny that the first sentence describes an “ideal” situation which can be handled only with a mixed-effects model – which, at least for conventional crossovers, is taboo. I would say that the second sentence was written with non-replicated crossovers in mind. We discussed in the forum whether subjects with only RR data (say, due to dropouts in the 3rd period of a partial replicate design in sequence RRT) should be included for the estimation of CV_wR, and the consensus was: yes. I simulated a small partial replicate with s_wT = s_wR = 0.3, s_bT = s_bR = 1 and removed the last observation of subject 18 in sequence RRT:
[Results table omitted; columns: Method, DF, CV_wR, 90% CI (L, U), PE/GMR, log½.]

The log half-width (log½) is useful when comparing methods: a higher value points to a more conservative decision (wider CI). As usual, Method C is the most conservative one but not “compatible with the guideline”. The Q&A states “… it will generally give wider [sic] confidence intervals than those produced by methods A and B.” Applicants love it when an agency recommends a liberal method.

Further down in the Q&A:

For replicate designs the results from the two approaches [A and B] will differ if there are subjects included in the analysis who do not provide data for all treatment periods. Either approach is considered scientifically acceptable, but for regulatory consistency it is considered desirable to see the same type of analysis across all applications.

Reading between the lines I got the impression that the EMA believes that Method A is always more conservative than Method B. This is not correct. Data set upon request.

What do you think? What do you do?

— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
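The two quantities driving the comparison can be sketched in a few lines of Python (illustrative numbers only, not the simulated data set): the conversion of a within-subject log-scale SD into a CV, and the log half-width used above to rank the methods by conservatism.

```python
import math

def cv_from_sd(s_w):
    """Convert a within-subject SD on the log scale to a CV (e.g. CV_wR)."""
    return math.sqrt(math.exp(s_w ** 2) - 1)

def log_half_width(lower, upper):
    """Half-width of a confidence interval on the log scale.

    A larger value means a wider CI, i.e. a more conservative method.
    """
    return (math.log(upper) - math.log(lower)) / 2

# with s_wR = 0.3 (as in the simulation) CV_wR is just above 30%
print(round(100 * cv_from_sd(0.3), 2))   # -> 30.69

# two hypothetical 90% CIs: the wider one has the larger log half-width
print(round(log_half_width(0.88, 1.12), 4))
print(round(log_half_width(0.85, 1.16), 4))
```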
zizou (Junior) · Plzeň, Czech Republic · 2017-07-12 23:12 · @ Helmut · Posting: # 17536

Dear Helmut,

» I’m asking myself how to interpret this part of the BE-GL in the section Subject accountability: Ideally, all treated subjects should be included in the statistical analysis. However, subjects in a crossover trial who do not provide evaluable data for both of the test and reference products […] should not be included. Funny that the first sentence describes an “ideal” situation which can be handled only with a mixed effects model – which at least for conventional crossovers is taboo.

When a subject vomits shortly after administration in the first period and then comes down with, e.g., the flu, not even a mixed model can help.

» I would say that the second sentence was written having non-replicated crossovers in mind.

Who knows… The EMA mentioned it in the Q&A (page 15) as well: The question of whether to use fixed or random effects is not important for the standard two period, two sequence (2×2) crossover trial. In section 4.1.8 of the guideline it is stated that “subjects in a crossover trial who do not provide evaluable data for both of the test and reference products should not be included.” Provided this is followed the confidence intervals for the formulation effect will be the same regardless of whether fixed or random effects are used. So maybe not.

Regarding the intra-subject variability: I think that in most subjects the test and reference formulations will differ more than two administrations of the reference do. Hence, excluding subjects without concurrent T and R data (i.e., subjects with only RR data) will more likely increase the intra-subject CV of the pooled data, because we exclude subjects whose two R values probably differ less than T vs. R does. At least that’s my guess, and I see the point in it. To be on the safe side, exclude all these RR subjects from the T/R evaluation (their RR differences could “incorrectly” lower the intra-subject CV, i.e., narrow the 90% CI). Of course it can happen, as in your data example, that CV_W will be lower (i.e., the 90% CI narrower) after exclusion of the RR-only subject(s), but I would really expect the opposite more often.

» We discussed in the forum whether subjects with only RR data (…) should be included for the estimation of CV_wR and the consensus was: yes.

Yes from me too, if the subject is not an outlier – which is another much-discussed topic, with no guideline defining what an outlier is. (I am thinking of Method A, the simplest method to reason about, and the EMA’s preference.)

The comparison of the methods is beyond my capability (especially Method C, a mixed model with specific settings which is not available in many statistical packages(?); I haven’t given Method C much time myself, mainly because Method A is preferred by the EMA).

Btw., almost every time I open the FDA’s draft progesterone guidance my eyes notice one line (I highlighted it in red) with a little wow:

proc glm data=scavbe;
  …
  estimate 'average' intercept 1 seq 0.3333333333 0.3333333333 0.3333333333;

Note: The red color doesn’t mean wrong here. Given that the FDA does not round off the 90% CI limits, it is quite shocking that a precision of 10 decimal places is deemed enough! I think more beautiful – and more accurate – would be:

estimate 'average' intercept 3 seq 1 1 1 / divisor=3;

(I don’t have SAS, so if I am wrong please don’t shame me.) A little fun.

I know that in mixed-model methods there are parameters for the convergence criterion (e.g., 0.0000000001) and the maximum number of iterations, which may also have a little influence on the precision(?) when a 90% CI limit sits on the border of 80 or 125%. (Not as easy as the EMA’s Method A.) What if BE fails with the default settings, and with more precision we get within 80–125%? When the BE is recalculated by a regulator (e.g., the FDA) with default settings and the result is a failure, then the BE is challenged. x)

Best regards, zizou 
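zizou’s point about the rounded coefficients can be put into numbers. A minimal sketch with hypothetical sequence means (not the progesterone data): coefficients rounded to ten decimals sum to 0.9999999999 instead of 1, biasing the ‘average’ estimate by roughly the grand mean × 1E-10.

```python
# hypothetical per-sequence LS means on the log scale (made-up numbers)
seq_means = [4.60, 4.65, 4.70]

# FDA-style coefficients, rounded to 10 decimal places
rounded = sum(0.3333333333 * m for m in seq_means)

# exact weighting, as with "seq 1 1 1 / divisor=3"
exact = sum(seq_means) / 3

# the rounded coefficients bias the estimate by ~ 5E-10 here:
# harmless for BE decisions, but needlessly imprecise
print(exact - rounded)
```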
Helmut (Hero) · Vienna, Austria · 2017-07-13 20:32 · @ zizou · Posting: # 17543

Hi zizou,

partly answering; I have to chew on some other points…

» The comparison of the methods is beyond my capability (especially Method C, a mixed model with specific settings which is not available in many statistical packages(?)

SAS (and JMP = the poor man’s SAS), Phoenix WinNonlin, Stata, …

» Btw., almost every time I open the FDA’s draft progesterone guidance my eyes notice one line (I highlighted it in red) with a little wow:
» estimate 'average' intercept 1 seq 0.3333333333 0.3333333333 0.3333333333;

Yes, it hurts.

» Given that the FDA does not round off the 90% CI limits …

Oh, rounding has been the FDA’s requirement for ages. Actually the EMA followed this bad practice.

» I think more beautiful – and more accurate – would be:
» estimate 'average' intercept 3 seq 1 1 1 / divisor=3;
» (I don’t have SAS, so if I am wrong please don’t shame me.)

I don’t speak SAS either, but I’m sure you looked it up in the online manual.

» I know that in mixed-model methods there are parameters for the convergence criterion (e.g., 0.0000000001) and the maximum number of iterations, which may also have a little influence on the precision(?) when a 90% CI limit sits on the border of 80 or 125%. (Not as easy as the EMA’s Method A.)
» What if BE fails with the default settings, and with more precision we get within 80–125%? When the BE is recalculated by a regulator (e.g., the FDA) with default settings and the result is a failure, then the BE is challenged.

Correct & a good point! I have never seen a failure in practice* but it is interesting what the Canadians have to say: By definition [!] the crossover design is a mixed effects model [!] with fixed and random effects. The basic two period crossover can be analysed according to a simple fixed effects model and least squares means estimation. Identical results will be obtained from a mixed effects analysis such as Proc Mixed in SAS®. If the mixed model approach is used, parameter constraints should be defined in the protocol. Higher order models must be [!] 
analysed with the mixed model approach in order to estimate random effects properly. (my emphases)
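The Q&A’s (and Health Canada’s) point that fixed and random effects coincide for a complete 2×2 crossover comes down to the analysis reducing to within-subject T−R contrasts. A minimal sketch with made-up log-scale differences, ignoring the period adjustment for brevity (so df = n − 1 here, where a fixed-effects crossover analysis would use n − 2) and a hard-coded t-quantile:

```python
import math
import statistics

# hypothetical per-subject differences log(T) - log(R); complete data only,
# because subjects lacking either T or R contribute no difference at all
d = [0.05, -0.02, 0.10, 0.00, 0.03, -0.05, 0.08, 0.01, -0.01, 0.04]

n = len(d)
mean_d = statistics.mean(d)
se = statistics.stdev(d) / math.sqrt(n)
t95 = 1.8331                     # t-quantile, alpha = 0.05, df = n - 1 = 9

pe = math.exp(mean_d)            # point estimate (GMR)
lower = math.exp(mean_d - t95 * se)
upper = math.exp(mean_d + t95 * se)
print(f"PE {pe:.4f}, 90% CI {lower:.4f} - {upper:.4f}")
```

With complete pairs every subject acts as his own control, which is why adding a random subject effect cannot change the confidence interval of the formulation contrast.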
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 