Bioequivalence and Bioavailability Forum

Helmut
Hero
Homepage
Vienna, Austria,
2013-04-16 17:40

Posting: # 10424
Views: 13,890
 

 Potvin C in the EU [Two-Stage / GS Designs]

Dear all,

new experiences from an ongoing MRP. Two studies (first one in two groups due to logistic reasons), Method C (!), all metrics in both studies passed in stage 1 (>80% power in the interim, no α-adjustment = 90% CIs).

Studies’ models in stage 1:
  1. Fixed:  sequence, period, treatment, group, group × treatment
    Random: subject(sequence)

  2. Fixed:  sequence, period, treatment
    Random: subject(sequence)
Accepted by Germany (RMS); no comments from Austria, Denmark, Sweden, and The Netherlands. Spain:

“Statistical analysis should be GLM. Please justify. Please provide the NOL.”

OK, I will repeat the analyses with all fixed effects (sigh; identical CIs…). ;-)
BTW, does anybody know what “NOL” means?

P.S.: All metrics would have passed with α_adj 0.0294 (Method B) as well. I did not report these results and, surprisingly, was not asked for them.
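For readers who want to follow the interim logic: the Method C decision at stage 1 can be sketched with the PowerTOST package (a sketch only; power.TOST() with its 2×2 defaults is assumed, CV and n taken from the interim analysis):

```r
library(PowerTOST)  # power.TOST(): power of the TOST procedure

# Potvin 'Method C' interim sketch: estimate power at alpha = 0.05 with
# the stage-1 CV and sample size; if power >= 80 %, evaluate BE with the
# unadjusted 90 % CI, otherwise fall back to alpha_adj = 0.0294.
method_C_alpha <- function(CV, n, GMR = 0.95) {
  pwr <- power.TOST(alpha = 0.05, CV = CV, n = n, theta0 = GMR)
  if (pwr >= 0.80) 0.05 else 0.0294
}
```

With, e.g., an interim CV of 20 % and n = 24 the interim power is roughly 90 %, so the unadjusted α = 0.05 (90 % CI) applies, as in the studies described above.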

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
ElMaestro
Hero

Denmark,
2013-04-17 02:31

@ Helmut
Posting: # 10425
Views: 12,317
 

 Potvin C in the EU

Hi Hötzi,

» new experiences from an ongoing MRP. Two studies (first one in two groups due to logistic reasons), Method C (!), all metrics in both studies passed in stage 1 (>80% power in the interim, no α-adjustment = 90% CIs).

Ahhh... MRP - a relic from the bygone ages when your grandad walked around in the snowy Alps with bow and arrows looking for deer.

» Studies’ models in stage 1:
»   1. Fixed:  sequence, period, treatment, group, group × treatment
»      Random: subject(sequence)
»   2. Fixed:  sequence, period, treatment
»      Random: subject(sequence)
» Accepted by Germany (RMS); no comments from Austria, Denmark, Sweden, and The Netherlands. Spain: “Statistical analysis should be GLM. Please justify. Please provide the NOL.”
» OK, I will repeat the analyses with all fixed effects (sigh; identical CIs…). ;-)

Anything to please the assessor. I have a feeling the assessor means LM but might not know what that really means since every LM in SAS is done via PROC GLM.
But honestly, group × treatment?? I'd rather not go down that road. I hope you do not set a precedent.

» BTW, does anybody know what “NOL” means?

I am fairly sure it is either "No Objection Lertificate" or "Nixon's Obstructive Leprosy". I don't see what else it could possibly be.

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
Helmut
Hero
Homepage
Vienna, Austria,
2013-04-17 12:23

@ ElMaestro
Posting: # 10426
Views: 12,319
 

 Potvin C in the EU

My Capt’n!

» MRP - a relic from the bygone ages when your grandad walked around in the snowy alps with bow and arrows looking for deer.

:-D

» Anything to please the assessor. I have a feeling the assessor means LM but might not know what that really means since every LM in SAS is done via PROC GLM.

Yep.

» But honestly, group × treatment?? I'd rather not go down that road. I hope you do not set a precedent.

Not my invention. See this post for the reference.

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2013-04-17 13:10

@ Helmut
Posting: # 10427
Views: 12,423
 

 2 Groups model FDA

Gents!

» » But honestly, group × treatment?? I'd rather not go down that road. I hope you do not set a precedent.
»
» Not my invention. See this post for the reference.

I have another 'better' one (from a Letter of FDA, Barbara M. Davit)! :cool:

" ... the following statistical model should be used when analyzing data from a multiple group bioequivalence study:

Group
Sequence
Treatment
Subject(nested within Group*Sequence)
Period(nested within Group)
Group-by-Sequence Interaction (sic!)
Group-by-Treatment Interaction


Subject(nested within Group*Sequence) is a random effect and all other factors are fixed effects. ..."

Enormous effect(s) :-D.

BTW: Helmut, how had you planned the evaluation of the second stage, if it would be necessary?

BTW2: In the context of a 2-stage design (also a 2-group design, with the groups here called stages) the mighty oracle prohibits the Group-by-Treatment interaction! From the EMA Q&A Rev. 7:
"Conclusion ... A term for a formulation*stage interaction should not be fitted."

BTW3: NOL = no objection letter ?

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2013-04-17 16:37

@ d_labes
Posting: # 10428
Views: 12,718
 

 2 Groups model FDA

Hi Detlew!

» I have another 'better' one (from a Letter of FDA, Barbara M. Davit)! :cool:
» […]
»   Group
»   Sequence
»   Treatment
»   Subject(nested within Group*Sequence)
»   Period(nested within Group)
»   Group-by-Sequence Interaction (sic!)
»   Group-by-Treatment Interaction
» Subject(nested within Group*Sequence) is a random effect and all other factors are fixed effects. ..."
»
» Enormous effect(s) :-D.

Wow, this will send ElMaestro’s Silly-o-Meter into nirvana!
I have seen some FDA code a good while ago:
PROC GLM DATA = DATASET;
  CLASS SUBJ TRT PER SEQ GRP;
  MODEL Y = GRP SEQ SEQ*GRP SUBJ(SEQ*GRP) PER(GRP) TRT TRT*GRP /SS3;
  RANDOM SUBJ(SEQ*GRP);
  TEST H = SEQ GRP E = SUBJ(SEQ*GRP)/HTYPE=3 ETYPE=3;
  ESTIMATE 'T VS R' TRT 1 -1;
  LSMEANS TRT/STDERR PDIFF CL ALPHA=.1;
RUN;

Followed by:

If the Group-by-treatment interaction test is not statistically significant (p ≥0.1), only the Group-by-treatment term can be dropped from the model.
If the Group-by-treatment interaction is statistically significant (p <0.1), DBE requests that equivalence be demonstrated in one of the groups, provided that the group meets minimum requirements for a complete bioequivalence study.
Please note that the statistical analysis for bioequivalence studies dosed in more than one group should commence only after all subjects have been dosed and all pharmacokinetic parameters have been calculated. Statistical analysis to determine bioequivalence within each dosing group should never be initiated prior to dosing the next group; otherwise the study becomes one of sequential design.
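The quoted decision rule could be sketched in R along these lines (a sketch only; the data frame and its column names Y, Trt, Per, Seq, Grp, Subj are assumptions, with subjects coded uniquely across groups; the 0.1 cut-off is from the quoted text):

```r
# Sketch of the quoted decision rule: fit the full FDA group model,
# test the group-by-treatment interaction at the 0.1 level, and either
# drop only that term or stop and analyse a single group.
fda_group_rule <- function(data, cutoff = 0.1) {
  full  <- lm(Y ~ Grp + Seq + Grp:Seq + Subj %in% (Grp:Seq) +
                  Per %in% Grp + Trt + Grp:Trt, data = data)
  p_gxt <- anova(full)["Grp:Trt", "Pr(>F)"]
  if (p_gxt >= cutoff) {
    # not significant: only the Grp:Trt term is dropped from the model
    lm(Y ~ Grp + Seq + Grp:Seq + Subj %in% (Grp:Seq) +
           Per %in% Grp + Trt, data = data)
  } else {
    # significant: equivalence has to be shown within one of the groups
    stop("group-by-treatment interaction significant (p < 0.1)")
  }
}
```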


EMA & random effects… :crying: Phoenix gives me headaches if all effects are fixed (remember this thread?)…

» BTW: Helmut, how had you planned the evaluation of the second stage, if it would be necessary?

Probably too naïvely:
fixed:  sequence, treatment, stage, period, group,
        group × treatment, stage × treatment
random: subject(sequence × stage)


» BTW2: In context of a 2-Stage design, also a 2-group design with groups here called stage, the mighty oracle prohibit us the Group-by-Treatment Interaction! From the EMA Q&A Rev.7 :
» "Conclusion ... A term for a formulation*stage interaction should not be fitted."

Ah-ja!

» BTW3: NOL = no objection letter ?

Good idea. The reports contained a copy of the IEC-approval and a section “Notification to Health Authorities”:

All approvals needed for conducting the trial were obtained; the trial was performed in accordance with the German Medicines Act (AMG). The study was approved by the BfArM on DD Mon YYYY. The study notification to the BfArM [§ 42 (2) AMG,1 § 9 (1) GCP-V] was performed by ███ on DD Mon YYYY. Local authorities were notified according to § 67 AMG (‘█████’ for the clinical site ███ and ‘█████’ for the sponsor). On DD Mon YYYY the end of the trial was reported by ███ to the BfArM.

Will submit a copy of BfArM’s approvals (protocol, study notification).

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2013-04-18 09:55

@ Helmut
Posting: # 10431
Views: 12,185
 

 Group effects obsolete?

Dear Helmut!

» I have seen some FDA code a good while ago:
» PROC GLM DATA = DATASET;
»   CLASS SUBJ TRT PER SEQ GRP;
»   MODEL Y = GRP SEQ SEQ*GRP SUBJ(SEQ*GRP) PER(GRP) TRT TRT*GRP /SS3;
»   RANDOM SUBJ(SEQ*GRP);
»   TEST H = SEQ GRP E = SUBJ(SEQ*GRP)/HTYPE=3 ETYPE=3;
»   ESTIMATE 'T VS R' TRT 1 -1;
»   LSMEANS TRT/STDERR PDIFF CL ALPHA=.1;
» RUN;

Yep. Implementation of what is written in the letter.
But IMHO the RANDOM statement is superfluous, since it doesn't do anything other than print a symbolic representation of the expected mean squares. If one is really interested in having a random effect for subj(seq*group), PROC MIXED should be used.
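For what it's worth, the PROC MIXED analogue can be sketched in R with nlme (a sketch under the assumption of a data set with the same column names as in the SAS code above, Y being the ln-transformed metric and SUBJ coded uniquely across groups):

```r
library(nlme)

# subject (nested in group-by-sequence) as a true random intercept;
# with uniquely coded subjects, 'random = ~ 1 | SUBJ' is that nesting.
fit_mixed <- function(data) {
  lme(Y ~ GRP + SEQ + GRP:SEQ + PER %in% GRP + TRT + TRT:GRP,
      random = ~ 1 | SUBJ, data = data, method = "REML")
}
# 90 % CIs of the fixed effects (including the T vs. R contrast):
# intervals(fit_mixed(dataset), which = "fixed", level = 0.90)
```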

» Followed by:
» ...
Same wording in the letter I have.

But the most interesting part of my letter:
"If all of the following criteria are met, it may not be necessary to test for group effects in the model:
  • the clinical study takes place at one site;
  • all study subjects have been recruited from the same enrollment pool;
  • all of the subjects have similar demographics; and
  • all enrolled subjects are randomly assigned to treatment groups at study outset
In this latter case, the appropriate statistical model would include only the factors Sequence, Period, Treatment and Subject (nested within Sequence)."


Can't imagine BE studies where this is not true, except multi-center studies, but these are rare for BE evaluation.
So what? Group effects are a Chimera?

» » BTW: Helmut, how had you planned the evaluation of the second stage, if it would be necessary?
»
» Probably too naïvely:
» fixed:  sequence, treatment, stage, period, group,
»         group × treatment, stage × treatment
» random: subject(sequence × stage)

Too naïvely may be true :-D.

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2013-04-19 14:34

@ d_labes
Posting: # 10438
Views: 12,187
 

 Group effects FDA/EMA

Dear Detlew!

» But the most interesting part of my letter:
» "If all of the following criteria are met, it may not be necessary to test for group effects in the model:
»
» the clinical study takes place at one site;
» all study subjects have been recruited from the same enrollment pool;
» all of the subjects have similar demographics; and
» all enrolled subjects are randomly assigned to treatment groups at study outset
»
» In this latter case, the appropriate statistical model would include only the factors Sequence, Period, Treatment and Subject (nested within Sequence)."
»
» Can't imagine BE studies where this is not true, …

Interestingly FDA’s Stat Guidance (Section VII.A.) states

If a crossover study is carried out in two or more groups of subjects […], the statistical model should be modified to reflect the multigroup nature of the study. In particular, the model should reflect the fact that the periods for the first group are different from the periods for the second group. This applies to all of the approaches (average, population, and individual BE) described in this guidance.

(my emphases)

» … except multi-center studies, but these are rare for BE evaluation.

Yep.

» So what? Group effects are a Chimera?

Probably. What about EMA’s BE GL (Section 4.1.8)?

The statistical analysis should take into account sources of variation that can be reasonably [sic] assumed to have an effect on the response variable.

Group = not a reasonable effect?

» Too naïvely may be true :-D.

Alter schützt vor Torheit nicht. ;-)

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
mittyri
Senior

Russia,
2014-08-21 15:34

@ Helmut
Posting: # 13416
Views: 10,053
 

 Group effects FDA/EMA

Dear Helmut,

» Group = not a reasonable effect?

As I understand it, no one can answer this question…
I've read some topics about the group effect in the statistical model, but I still have some questions:
1. Is it an obligatory EMA requirement to use a Group effect in the model whenever there is more than one group? Any experience of LoDs due to the absence of a Group effect in the statistical model?

2. What kind of statistical model is preferred for a European submission?
According to Detlew's post:
» In context of a 2-Stage design, also a 2-group design with groups here called stage, the mighty oracle prohibit us the Group-by-Treatment Interaction! From the EMA Q&A Rev.7 :
» "Conclusion ... A term for a formulation*stage interaction should not be fitted."
ElMaestro wrote that:
» Although I think the formulation x group interaction would steal a df and would be tested against the model residual (within). You are right that any step towards increasing the model residual translates into producer's risk.

Did you ever use the model without group × treatment?
Something like that:
Fixed: sequence, period, treatment, group, subject(sequence)

3. Did you ever get requests from EMA experts about calculating the CI in one of the groups, as for an FDA submission?

Sorry for probably repeating some questions.


Edit: I’m off for a three weeks vacation. I’m sure others will have a lot to say in the meantime. ;-) [Helmut]

Kind regards,
Mittyri
ElMaestro
Hero

Denmark,
2014-08-21 17:31

@ mittyri
Posting: # 13417
Views: 10,020
 

 Group effects FDA/EMA

Hello Mittyri,

» 1. Is it an obligatory EMA requirement to use a Group effect in the model whenever there is more than one group? Any experience of LoDs due to the absence of a Group effect in the statistical model?

Just include the term.... that can't be bad in a regulatory sense and it can't be a burden to you, right?

» 2. What kind of statistical model is preferred for Europe ..
» Something like that:
» Fixed: sequence, period, treatment, group, subject(sequence)

Sounds right to me, and I've had thumbs up in sc.advices.

» 3. Did you ever get requests from EMA experts about calculating the CI in one of the groups, as for an FDA submission?

Not sure what you are actually asking. I have never heard an EU regulator ask for something the FDA wants just because of the FDA's request in itself.

Hötzi: Enjoy your holidays. I am confident there will be good news in your mehlbox when you return :waving:

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
mittyri
Senior

Russia,
2014-10-01 11:25

@ ElMaestro
Posting: # 13629
Views: 9,518
 

 Group effects EMA

Hello ElMaestro and all,

Thank you for clarification!

1.

» Just include the term.... that can't be bad in a regulatory sense and it can't be a burden to you, right?

Including the group effect in the statistical model leads to a wider CI.
More effects in the model mean wider intervals.
Please correct me if I'm wrong.

Kind regards,
Mittyri
ElMaestro
Hero

Denmark,
2014-10-01 11:31

@ mittyri
Posting: # 13630
Views: 9,519
 

 Group effects EMA

Hi Mittyri,

» More effects in the model mean wider intervals.
» Please correct me if I'm wrong.

That's kind of incorrect.
The more effects you have in your model, the more variation is accounted for by the totality of terms; the residual will decrease, as will the residual's degrees of freedom. Therefore the width of the CI tends to shrink.

It may happen that the inclusion of a term apparently does nothing. This is due to the design, where some effect may contain another; in such a situation the residual variance, the residual df, and the width of the CI remain unaffected. But you will never see the CI widened due to more factors.

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
Helmut
Hero
Homepage
Vienna, Austria,
2014-10-01 13:24

@ mittyri
Posting: # 13632
Views: 9,482
 

 Group effects EMA

Hi Mittyri,

I agree with our Master. Simple example:

group subject period sequence treatment response
  1      1       1       RT       R        100
  1      1       2       RT       T         95
  1      2       1       TR       T         95
  1      2       2       TR       R         90
  1      3       1       RT       R        105
  1      3       2       RT       T        100
  1      4       1       TR       T        105
  1      4       2       TR       R        100
  2      5       1       RT       R        100
  2      5       2       RT       T         95
  2      6       1       TR       T         95
  2      6       2       TR       R         90
  2      7       1       RT       R         90
  2      7       2       RT       T         85
  2      8       1       TR       T         85
  2      8       2       TR       R         90

  • sequence+period+formulation+subject(sequence)
    98.65% [96.05–101.32%]

  • sequence+period+formulation+group+subject(sequence)
    98.65% [96.05–101.32%]
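A quick check in R (a sketch, re-entering the table above): since group is constant within each subject, the group column lies in the span of the subject dummies, so adding it cannot change the fit.

```r
d <- data.frame(
  group     = factor(rep(1:2, each = 8)),
  subject   = factor(rep(1:8, each = 2)),
  period    = factor(rep(1:2, 8)),
  sequence  = factor(rep(rep(c("RT", "TR"), each = 2), 4)),
  treatment = factor(rep(c("R", "T", "T", "R"), 4)),
  response  = c(100, 95, 95, 90, 105, 100, 105, 100,
                100, 95, 95, 90, 90, 85, 85, 90)
)
m1 <- lm(log(response) ~ sequence + period + treatment + subject, data = d)
m2 <- lm(log(response) ~ sequence + period + treatment + group + subject,
         data = d)
all.equal(deviance(m1), deviance(m2))         # TRUE: identical residuals
exp(confint(m1, "treatmentT", level = 0.90))  # the 90 % CI for T/R
```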

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
VStus
Regular

Poland,
2016-10-06 22:06

@ Helmut
Posting: # 16701
Views: 6,197
 

 Group effects EMA

Hi Helmut,

I have performed an exercise on this dataset in R and have to support Mittyri: the addition of more effects beyond group increases the MSE and, therefore, the CI:

anova(lm(log(response) ~ sequence + subject:sequence + period + treatment, data=data, na.action=na.exclude))["Residuals","Mean Sq"]
[1] 0.00075363
anova(lm(log(response) ~ sequence + subject:sequence + period + treatment + group, data=data, na.action=na.exclude))["Residuals","Mean Sq"]
[1] 0.00075363
anova(lm(log(response) ~ sequence + subject:sequence + period + treatment + group + period:group, data=data, na.action=na.exclude))["Residuals","Mean Sq"]
[1] 0.00078534


Due to my limited R knowledge, I cannot figure out how to add the other effects recommended by the FDA, e.g. subject(sequence × group), but I'm afraid that, at least in R, the more related effects, the wider the confidence intervals…

Regards, VStus
ElMaestro
Hero

Denmark,
2016-10-07 12:45

@ VStus
Posting: # 16705
Views: 6,085
 

 Group effects EMA

Hi VStus,

» I have performed an exercise on this dataset in R and have to support Mittyri: the addition of more effects beyond group increases the MSE and, therefore, the CI: (...)

Hehe, this is a truly remarkable phenomenon.
Inclusion of more factors or more crosses/nestings introduces new columns in the muddle matrix, so the residual variability measured as SS decreases. Nevertheless, the residual variance (MSE, i.e. SS divided by DF) may go up, as the new columns in the muddle matrix steal a bit from the DF pool. You should see that if you do an anova on your model object.
Quite possibly you are right: in theory the width of the CI could in such cases increase. I did not recalculate it in this case; perhaps that is really possible?!?
I doubt it would happen with biological data, but I see now that I should not have stated it like I did in the post above. I apologise for any confusion my post caused.

» Due to my limited R knowledge, I cannot figure out how to add the other effects recommended by the FDA, e.g. subject(sequence × group), but I'm afraid that, at least in R, the more related effects, the wider the confidence intervals…

You are ok as long as you code subjects uniquely, then it does not matter if you call them Subject or Subject in Sequence in Group etc. Search this forum for the famous Silly-o-meter and it will be crystal clear, of course. :-D:-D:-D:-D:-D





Not?!?

A good weekend to all of you.

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
mittyri
Senior

Russia,
2016-10-07 15:06
(edited by mittyri on 2016-10-07 15:32)

@ ElMaestro
Posting: # 16706
Views: 6,104
 

 Group effects EMA

Hi VStus & ElMaestro!

This is really funny: we spent the last two days discussing the group effect with Helmut during the workshop.

Yes, if you just add a Group term you do not see any changes (fixed: sequence, period, treatment, group, subject(sequence)).
But with Period %in% Group the situation is not the same!!

a little code for discussion:
# the original code is from this perfect paper:
# http://link.springer.com/article/10.1208/s12248-014-9661-0
Analyse222BE <- function (data, alpha=0.05, Group = FALSE) {
 
  data$Subj <- factor(data$Subj) # Subject
  data$Per  <- factor(data$Per)  # Period
  data$Seq  <- factor(data$Seq)  # Sequence
  data$Trt  <- factor(data$Trt)  # Treatment
  # be explicit
  ow <- options()
  options(contrasts=c("contr.treatment","contr.poly"))
 
  if(!Group) {
    muddle <- lm(log(Var)~Trt+Per+Seq+Subj, data=data)
  }  else {
    muddle <- lm(log(Var)~Trt+Per+Seq+Subj+
                          Group+Per:Group, data=data)
  }
  # in the standard contrasts option "contr.treatment"
  # the coefficient TrtT is the difference T-R since TrtR is set to zero
  lnPE     <- coef(muddle)["TrtT"]
  lnCI     <- confint(muddle,c("TrtT"), level=1-2*alpha)
  typeI    <- anova(muddle)
  names(typeI)[5]   <- "Pr(>F)"

  # no need for the next
  mse      <- summary(muddle)$sigma^2
  # another possibility:
  # mse <- typeI["Residuals","Mean Sq"]
  #df       <- df.residual(muddle)
 
  # back transformation to the original domain
  CV <- 100*sqrt(exp(mse)-1)
  PE <- exp(lnPE)
  CI <- exp(lnCI)
  # output
 
  sep <- paste0(rep("_", 79), collapse="") # separator line
  cat(sep,"\n")
  options(digits=8)
  cat("Type I sum of squares: ")
  print(typeI)

  cat("\nBack-transformed PE and ",100*(1-2*alpha))
  cat("% confidence interval\n")
  cat("CV (%) ..................................:",
      formatC(CV, format="f", digits=2),"\n")
  cat("Point estimate (GMR).(%).................:",
      formatC(100*PE, format="f", digits=2),"\n")
  cat("Lower confidence limit.(%)...............:",
      formatC(100*CI[1], format="f", digits=2) ,"\n")
  cat("Upper confidence limit.(%)...............:",
      formatC(100*CI[2],format="f", digits=2) ,"\n")
  cat(sep,"\n\n")
 
  #reset options
  options(ow)
}

data <- read.delim("https://static-content.springer.com/esm/art%3A10.1208%2Fs12248-014-9661-0/MediaObjects/12248_2014_9661_MOESM1_ESM.txt")
Group1 <- subset(data, Subj <= 9)
Group1$Group <- 1
Group2 <- subset(data, Subj > 9)
Group2$Group <- 2
data<-rbind(Group1, Group2)

Analyse222BE(data, alpha=0.05, Group = FALSE)
Analyse222BE(data, alpha=0.05, Group = TRUE)


> Analyse222BE(data, alpha=0.05, Group = FALSE)
_______________________________________________________________________________
_______________________________________________________________________________
Type I sum of squares: Analysis of Variance Table

Response: log(Var)
          Df  Sum Sq   Mean Sq  F value     Pr(>F)   
Trt        1 0.02285 0.0228494  3.57254   0.076998 . 
Per        1 0.04535 0.0453497  7.09049   0.017019 * 
Seq        1 0.21836 0.2183553 34.14018 2.4964e-05 ***
Subj      16 4.24539 0.2653366 41.48578 5.2285e-10 ***
Residuals 16 0.10233 0.0063958                       
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Back-transformed PE and  90% confidence interval
CV (%) ..................................: 8.01
Point estimate (GMR).(%).................: 95.09
Lower confidence limit.(%)...............: 90.76
Upper confidence limit.(%)...............: 99.62
_______________________________________________________________________________

> Analyse222BE(data, alpha=0.05, Group = TRUE)
_______________________________________________________________________________
_______________________________________________________________________________
Type I sum of squares: Analysis of Variance Table

Response: log(Var)
          Df  Sum Sq   Mean Sq  F value     Pr(>F)   
Trt        1 0.02285 0.0228494  3.42473   0.084021 . 
Per        1 0.04535 0.0453497  6.79711   0.019816 * 
Seq        1 0.21836 0.2183553 32.72757 4.0497e-05 ***
Subj      16 4.24539 0.2653366 39.76923 2.1387e-09 ***
Per:Group  1 0.00225 0.0022549  0.33797   0.569635   
Residuals 15 0.10008 0.0066719                       
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Back-transformed PE and  90% confidence interval
CV (%) ..................................: 8.18
Point estimate (GMR).(%).................: 94.92
Lower confidence limit.(%)...............: 90.47
Upper confidence limit.(%)...............: 99.59
_______________________________________________________________________________

Even PE differs!
What the heck?? Where am I wrong? I need to take some Schützomycin!
PS: I see now that the model with Period:Group is wrong.

Kind regards,
Mittyri
mittyri
Senior

Russia,
2016-10-09 13:27

@ mittyri
Posting: # 16708
Views: 5,909
 

 Fixed effects model with Group term

Dear All,

Now I'm using the model from the 2-stage design (without a standalone Period term):
    muddle <- lm(log(Var)~Group+Seq+Seq*Group+Subj%in%(Seq*Group)+Per%in%Group+Trt, data=data)
    # like in 2 stage design;
    # I know that Subject %in% ... is just a Subject, I didn't change this term for consistency
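The remark in the comment is easy to verify (a sketch, re-using Helmut's eight-subject example from earlier in the thread): with uniquely coded subjects, Subj %in% (Seq:Group) spans the same column space as a plain Subj term, so the residuals are identical.

```r
d <- data.frame(
  Group = factor(rep(1:2, each = 8)),
  Subj  = factor(rep(1:8, each = 2)),   # coded uniquely across groups
  Per   = factor(rep(1:2, 8)),
  Seq   = factor(rep(rep(c("RT", "TR"), each = 2), 4)),
  Trt   = factor(rep(c("R", "T", "T", "R"), 4)),
  Var   = c(100, 95, 95, 90, 105, 100, 105, 100,
            100, 95, 95, 90, 90, 85, 85, 90)
)
# the same model with the nested and with the plain subject term
m_nested <- lm(log(Var) ~ Group + Seq + Seq:Group + Subj %in% (Seq:Group) +
                          Per %in% Group + Trt, data = d)
m_plain  <- lm(log(Var) ~ Group + Seq + Seq:Group + Subj +
                          Per %in% Group + Trt, data = d)
all.equal(deviance(m_nested), deviance(m_plain))
```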


BTW, the question is still there, because the output is the same as above (PE and MSE are different).
Also please note that the Group factor is extremely significant.
When I run this dataset for FDA Model I in Phoenix (don't know the code for R):
  Dependent Hypothesis Numer_DF Denom_DF        F_stat     P_value
1   Ln(Var)        int        1       14 3.1047428e+03 0.000000000
2   Ln(Var)      Group        1       14 9.9615453e-01 0.335181899
3   Ln(Var)        Seq        1       14 9.7094813e-01 0.341167964
4   Ln(Var)  Group*Seq        1       14 5.7364354e-02 0.814182229
5   Ln(Var)  Group*Per        2       14 3.2110481e+00 0.071153125
6   Ln(Var)        Trt        1       14 3.4404496e+00 0.084798088
7   Ln(Var)  Trt*Group        1       14 2.3942317e-01 0.632200219

The Group factor is far away from the significance level.

It could be a nightmare for Russia, where the experts insist on the Group term in the model but everything else should be as in the EMA guideline, with all factors fixed (does anyone realise how many studies will fail? In this case the experts suggest not pooling the groups, as the FDA guidance states!).

Kind regards,
Mittyri
mittyri
Senior

Russia,
2016-10-10 18:33

@ mittyri
Posting: # 16710
Views: 5,891
 

 Fixed effects model: changing the F, p values

Dear All,

I cannot change the previous post, so please find my corrected code below.
I've found my mistake regarding the F values.
As ElMaestro suggested, all between-subject factors should be assessed vs. MSE(subject):

# http://link.springer.com/article/10.1208/s12248-014-9661-0
Analyse222BE <- function (data, alpha=0.05, Group = FALSE) {
 
 
  data$Per  <- factor(data$Per)  # Period
  data$Seq  <- factor(data$Seq)  # Sequence
  data$Trt  <- factor(data$Trt)  # Treatment
  data$Subj <- factor(data$Subj) # Subject
  # be explicit
  ow <- options()
  options(contrasts=c("contr.treatment","contr.poly"))
  if(!Group) {
    muddle <- lm(log(Var)~Trt+Per+Seq+Subj, data=data) # conventional model
  }  else {
    data$Group  <- factor(data$Group)  # Group
    muddle <- lm(log(Var)~Group+Seq+Seq*Group+Subj%in%(Seq*Group)+Per%in%Group+Trt, data=data) # like in 2 stage design
  }
 
  # in the standard contrasts option "contr.treatment"
  # the coefficient TrtT is the difference T-R since TrtR is set to zero
  lnPE     <- coef(muddle)["TrtT"]
  lnCI     <- confint(muddle,c("TrtT"), level=1-2*alpha)
  typeI    <- anova(muddle)
 
  #--- Type III sum of squares like SAS
  typeIII  <- drop1(muddle,test="F")
  typeIII$AIC <- NULL # null out this column
  # typeIII 3rd column is RSS - residual SS of model fit without term
  # we make Mean Sq in that column
  names(typeIII)[3] <- "Mean Sq"
  typeIII[3]     <- typeIII["Sum of Sq"]/typeIII["Df"]   # seq entry has originally 0 df
  # must equalize names, else rbind() doesn't function
  names(typeIII)[2] <- "Sum Sq";
  names(typeI)[5]   <- "Pr(>F)"
  names(typeIII)[5] <- "Pr(>F)"
  if(!Group) {
    typeIII    <- rbind(typeIII[2:3,], # tmt, period
                        typeI["Seq",],
                        typeIII["Subj",],
                        typeI["Residuals",])
    # correct the sequence test, denominator is subject ms
    MSden <- typeIII["Subj","Mean Sq"]
    df2   <- typeIII["Subj","Df"]
    fval  <- typeIII["Seq","Mean Sq"]/MSden
    df1   <- typeIII["Seq","Df"]
    typeIII["Seq",4] <- fval
    typeIII["Seq",5] <- 1-pf(fval, df1, df2)
  } else {
    typeIII    <- rbind(typeIII[2:3,], # tmt, period
                        typeI["Group",],
                        typeI["Seq",],
                        typeI["Group:Seq",],
                        typeIII["Group:Seq:Subj",],
                        typeI["Residuals",])
    # correct the sequence test, denominator is subject ms
    MSden <- typeIII["Group:Seq:Subj","Mean Sq"]
    df2   <- typeIII["Group:Seq:Subj","Df"]
    fval_Seq  <- typeIII["Seq","Mean Sq"]/MSden
    df1_Seq   <- typeIII["Seq","Df"]   
    typeIII["Seq",4] <- fval_Seq
    typeIII["Seq",5] <- 1-pf(fval_Seq, df1_Seq, df2)   

    fval_Group  <- typeIII["Group","Mean Sq"]/MSden   
    df1_Group   <- typeIII["Group","Df"]
    typeIII["Group",4] <- fval_Group
    typeIII["Group",5] <- 1-pf(fval_Group, df1_Group, df2)   
   
    fval_Group_Seq  <- typeIII["Group:Seq","Mean Sq"]/MSden     
    df1_Group_Seq   <- typeIII["Group:Seq","Df"]
    typeIII["Group:Seq",4] <- fval_Group_Seq
    typeIII["Group:Seq",5] <- 1-pf(fval_Group_Seq, df1_Group_Seq, df2)     

  }
 
  attr(typeIII,"heading")<- attr(typeIII,"heading")[2:3] # suppress "single term deletion"
  attr(typeIII,"heading")[1] <- "(mimicking SAS/SPSS but with Seq tested against Subj)\n"
  # no need for the next
  mse      <- summary(muddle)$sigma^2
 
  # back transformation to the original domain
  CV <- 100*sqrt(exp(mse)-1)
  PE <- exp(lnPE)
  CI <- exp(lnCI)
  # output
 
  options(digits=8)
  cat("Type I sum of squares: ")
  print(typeI)
  cat("\nType III sum of squares: ")
  print(typeIII)
  cat("\nBack-transformed PE and ",100*(1-2*alpha))
  cat("% confidence interval\n")
  cat("CV (%) ..................................:",
      formatC(CV, format="f", digits=2),"\n")
  cat("Point estimate (GMR).(%).................:",
      formatC(100*PE, format="f", digits=2),"\n")
  cat("Lower confidence limit.(%)...............:",
      formatC(100*CI[1], format="f", digits=2) ,"\n")
  cat("Upper confidence limit.(%)...............:",
      formatC(100*CI[2],format="f", digits=2) ,"\n")
 
  #reset options
  options(ow)
}

data <- read.delim("https://static-content.springer.com/esm/art%3A10.1208%2Fs12248-014-9661-0/MediaObjects/12248_2014_9661_MOESM1_ESM.txt")
Group1 <- subset(data, Subj <= 9)
Group1$Group <- 1
Group2 <- subset(data, Subj > 9)
Group2$Group <- 2
data<-rbind(Group1, Group2)

Analyse222BE(data, alpha=0.05, Group = FALSE)
Analyse222BE(data, alpha=0.05, Group = TRUE)


Now the Group factor seems more plausible.
What bothers me is that the Group effect is not equal in R and PHX; I cannot figure out the reason.

Kind regards,
Mittyri
ElMaestro
Hero

Denmark,
2016-10-10 20:10

@ mittyri
Posting: # 16711
Views: 5,835
 

 Fixed effects model: changing the F, p values

Hi there,

» What bothers me is that the Group effect is not equal in R and PHX; I cannot figure out the reason.

Type III SS for Group may be somewhat challenging because it is a between-subject factor, like Sequence. I am OoO so cannot test it well, but I reckon you'd need to play around with models without subject and sequence in order to get something for Group.

This post is untested and just reflects my thoughts. It may not in any way reflect reality (like most of my posts in general).

Actually, if any of the R experts out there see this post, I am hereby suggesting you write a general extension to drop1 which produces meaningful (that means informative, if anyone wonders) SS's and F-tests for models with multiple between-subject factors.
The ideal candidate for this drop1 extension would -purely hypothetically- be a statistical expert with mahatma status who has plenty spare time because he recently went into retirement. But of course, such people are hard to find.

Hötzi, do you think any such person exists on the planet or is it just wishful thinking?!??

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
Helmut
Hero
Homepage
Vienna, Austria,
2016-10-11 00:17

@ ElMaestro
Posting: # 16712
Views: 5,830
 

 Fixed effects model: changing the F, p values

Hi ElMaestro,

» I am OoO so cannot test it well, …

So am I.

» … a statistical expert with mahatma status who has plenty spare time because he recently went into retirement.

And is away from home for the next two weeks enjoying delicious “Flens”.

» Hötzi, do you think any such person exists on the planet or is it just wishful thinking?!??

Sounds like a Radio Yerewan question to me. A: “In principle yes, but…”

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2016-10-11 13:17
(edited by d_labes on 2016-10-11 13:30)

@ ElMaestro
Posting: # 16722
Views: 5,770
 

 Holy War of type III

Gentlemen (& Ladies),

» The ideal candidate for this drop1 extension would -purely hypothetically- be a statistical expert with mahatma status who has plenty spare time because he recently went into retirement. But of course, such people are hard to find.
»
» Hötzi, do you think any such person exists on the planet or is it just wishful thinking?!??

I think a person with such skills couldn't be found within the whole galaxy. Maybe far, far away in outer space :-D.

If I were such a person, I would nevertheless by no means delve into the mud of Type I, II, III or even Type IV tests in ANOVA :no:.
There is some sort of Holy War out there with regard to such tests when it comes to unbalanced designs. This war is raging especially between the R folks and the SAS adepts; the latter are said to have invented "Type III", as I was told. Therefore it has to be seen as evil and must be abandoned by the R hordes under the virtual leadership of Douglas Bates.

No person with a lucid mind, or even with expert or mahatma status, would participate in such a hassle. Not for money (even much!) nor for good words. Especially considering that this F-test and p-value Kuddelmuddel is good for nothing within the context of BE, being only a description of nuisance terms in the ANOVA model.

Better drink a good beer or a good glass of wine :party: while contemplating the meaning of life and all that.

Regards,

Detlew
mittyri
Senior

Russia,
2016-10-11 15:07
(edited by mittyri on 2016-10-11 17:02)

@ d_labes
Posting: # 16724
Views: 5,725
 

 Significance of Group effect in Russia: why the type III is so 'important'

Dear Detlew,

Thank you very much, it clarifies a lot about R's model logic.
Let me explain the current situation in Russia and why we are so interested in ANOVA type III.
Russian experts read your post with a letter from Barbara Davit.
Now, in ALL cases where the protocol does not state the splitting into groups, they request the analysis with a group term if the study was conducted in groups (irrespective of the reasons and timing).
As you know, there is an algorithm in the FDA guidance: if the Group term is significant, one cannot pool the groups and should proceed with an analysis in one of the groups. (Almost) end of story.
Many protocols state that all effects should be fixed (according to the EMA GL)!
So I tried to modify your code to get the significance level of the group effect, and I found that I cannot reproduce the ANOVA type III PHX results in R (using the FDA approach and the all-fixed-effects approach).

» Better drink a good beer or a good glass of wine :party: while contemplating the meaning of life and all that.
Agree :-D

Kind regards,
Mittyri
zizou
Junior

Plzeň, Czech Republic,
2016-10-11 01:06

@ mittyri
Posting: # 16713
Views: 5,830
 

 Fixed effects model: changing the F, p values

Dear mittyri,
for the results consistency (R versus Phoenix as you reported), there should be one more term of marvelous effect x) ... you posted it as number 7: (I resorted it for comparison of F and p values, note: int and Trt are missing in the results from R code)
»   Dependent Hypothesis Numer_DF Denom_DF        F_stat     P_value
» 5   Ln(Var)  Group*Per        2       14 3.2110481e+00 0.071153125
» 7   Ln(Var)  Trt*Group        1       14 2.3942317e-01 0.632200219
» 2   Ln(Var)      Group        1       14 9.9615453e-01 0.335181899
» 3   Ln(Var)        Seq        1       14 9.7094813e-01 0.341167964
» 4   Ln(Var)  Group*Seq        1       14 5.7364354e-02 0.814182229
» 1   Ln(Var)        int        1       14 3.1047428e+03 0.000000000
» 6   Ln(Var)        Trt        1       14 3.4404496e+00 0.084798088


If you change the muddle:
muddle <- lm(log(Var)~Group+Seq+Seq*Group+Subj%in%(Seq)+Per%in%Group+Trt+Group*Trt, data=data)
and run the mentioned code... Voilà!
Type III sum of squares:
(mimicking SAS/SPSS but with Seq tested against Subj)

log(Var) ~ Group + Seq + Seq * Group + Subj %in% (Seq * Group) + ...
               Df  Sum Sq   Mean Sq  F value     Pr(>F)   
Group:Per       2 0.04514 0.0225681  3.21105   0.071153 . 
Group:Trt       1 0.00168 0.0016827  0.23942   0.632200   
Group           1 0.22546 0.2254640  0.79946   0.386368   
Seq             1 0.27383 0.2738263  0.97095   0.341168   
Group:Seq       1 0.01618 0.0161779  0.05736   0.814182   
Group:Seq:Subj 14 3.94827 0.2820195 40.12641 7.5802e-09 ***
Residuals      14 0.09840 0.0070283                       
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The comparison is telling us that with groups it will be a little bit more complicated.

» Now the Group factor seems to be more suitable.
» What bothers me is that the Group effect is not equal in R and PHX; I cannot figure out the reason
Good luck!
zizou
mittyri
Senior

Russia,
2016-10-11 13:43
(edited by mittyri on 2016-10-11 16:53)

@ zizou
Posting: # 16723
Views: 5,801
 

 FDA group model in R

Dear zizou,

Thank you for suggestion!
The FDA group model for PHX is described by Helmut here:
fixed:  Group+Sequence+Sequence(Group)+Period(Group)+Treatment+Treatment×Group
random: Subject(Sequence×Group)


I think in this case PHX simply switches from the GLM to the MIXED procedure and does not produce SS tables.
We all know that the GLM procedure should be used for the conventional FDA model. Could we use the same procedure (GLM) here?

The relationship between GLM/MIXED is described very well in 'must-read' post by ElMaestro

I executed the code with unequal sequences (removing Subj18 from the reference dataset):
PHX Sequential tests:
  Dependent Hypothesis Numer_DF Denom_DF        F_stat     P_value
1   Ln(Var)        int        1       13 2830.51538137 0.000000000
2   Ln(Var)      Group        1       13    1.01359006 0.332416756
3   Ln(Var)        Seq        1       13    0.69165428 0.420616500
4   Ln(Var)  Group*Seq        1       13    0.12190952 0.732563224
5   Ln(Var)  Group*Per        2       13    2.99449184 0.085186463
6   Ln(Var)        Trt        1       13    3.42927592 0.086890932
7   Ln(Var)  Trt*Group        1       13    0.34285051 0.568214408


PHX Partial tests:
  Dependent Hypothesis Numer_DF Denom_DF        F_stat     P_value
1   Ln(Var)        int        1       13 2803.64675463 0.000000000
2   Ln(Var)      Group        1       13    1.14134985 0.304804606
3   Ln(Var)        Seq        1       13    0.65955233 0.431338627
4   Ln(Var)  Group*Seq        1       13    0.12190952 0.732563224
5   Ln(Var)  Group*Per        2       13    3.18289277 0.074972760
6   Ln(Var)        Trt        1       13    3.53470601 0.082691107
7   Ln(Var)  Trt*Group        1       13    0.34285051 0.568214408



R:
muddle <- lm(log(Var)~Group+Seq+Seq*Group+Subj%in%(Seq)+Per%in%Group+Trt+Group*Trt, data=data)
anova(muddle)

              Df  Sum Sq   Mean Sq  F value     Pr(>F)   
Group           1 0.30128 0.3012782 40.66580 2.4299e-05 ***
Seq             1 0.20559 0.2055864 27.74955 0.00015211 ***
Trt             1 0.02101 0.0210135  2.83635 0.11599640   
Group:Seq       1 0.03624 0.0362362  4.89108 0.04551863 * 
Group:Per       2 0.04876 0.0243815  3.29096 0.06975471 . 
Group:Trt       1 0.00254 0.0025401  0.34285 0.56821441   
Group:Seq:Subj 13 3.86410 0.2972387 40.12055 2.5513e-08 ***
Residuals      13 0.09631 0.0074086


drop1(muddle,test="F")
Single term deletions
Model:
log(Var) ~ Group + Seq + Seq * Group + Subj %in% (Seq * Group) +
    Per %in% Group + Trt + Group * Trt
               Df Sum of Sq     RSS       AIC  F value     Pr(>F)   
<none>                      0.09631 -157.4617                       
Group:Per       2   0.04716 0.14347 -147.9107  3.18289   0.074973 . 
Group:Trt       1   0.00254 0.09885 -158.5766  0.34285   0.568214   
Group:Seq:Subj 13   3.86410 3.96042  -57.1004 40.12055 2.5513e-08 ***             

and the CIs are far from equal... :-(

PS: I tried to switch to the mixed model
library(nlme)
Group222FDA<- function(data, alpha=0.05)
{
  data$Per  <- factor(data$Per)  # Period
  data$Seq  <- factor(data$Seq)  # Sequence
  data$Trt  <- factor(data$Trt)  # Treatment
  data$Subj <- factor(data$Subj) # Subject
  data$Group  <- factor(data$Group)
 
  muddlelme <- lme(log(Var) ~ Group + Seq + Seq %in% Group
                   + Per %in% Group + Trt +  Trt * Group ,
                   random=~1|Subj, data=data, method="REML")
  lnPE     <- coef(muddlelme)[1, "TrtT"]
  lnCI     <- confint(muddlelme, c("TrtT"), level=1-2*alpha)[[1]][1:2]
  PE       <- exp(lnPE)
  CI <- exp(lnCI)
  typeI <- anova(muddlelme)
  # output
  options(digits=8)
  cat("Type I sum of squares: \n")
  print(typeI)
  cat("\nBack-transformed PE and ",100*(1-2*alpha))
  cat("% confidence interval\n")
  cat("Point estimate (GMR).(%).................:",
      formatC(100*PE, format="f", digits=2),"\n")
  cat("Lower confidence limit.(%)...............:",
      formatC(100*CI[1], format="f", digits=2) ,"\n")
  cat("Upper confidence limit.(%)...............:",
      formatC(100*CI[2],format="f", digits=2) ,"\n")
}
data <- read.delim("https://static-content.springer.com/esm/art%3A10.1208%2Fs12248-014-9661-0/MediaObjects/12248_2014_9661_MOESM1_ESM.txt")

Group1 <- subset(data, Subj <= 9)
Group1$Group <- 1
Group2 <- subset(data, Subj > 9& Subj < 18)
Group2$Group <- 2
data<-rbind(Group1, Group2)
Group222FDA(data, alpha=0.05)

The ANOVA type I is the same as in PHX, but the PE and CIs are not. I'm stuck…

Kind regards,
Mittyri
ElMaestro
Hero

Denmark,
2016-10-12 01:07

@ mittyri
Posting: # 16725
Views: 5,699
 

 FDA group model in R

Hi all,

be careful:

?lm

"The terms in the formula will be re-ordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on".

So don't compare directly with any sequential anova from software that doesn't do exactly that.
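A tiny illustration of that re-ordering (the variable names are arbitrary):

```r
# lm()/terms() re-order formula terms by interaction order:
# all main effects first, then all two-way interactions, and so on.
f <- y ~ Trt:Group + Trt + Group + Seq
attr(terms(f), "term.labels")
# The labels come back as "Trt", "Group", "Seq", "Trt:Group" --
# the interaction is moved after the main effects, so a sequential
# (type I) anova() tests Trt:Group last, regardless of where it
# was typed in the formula.
```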

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
VStus
Regular

Poland,
2016-10-17 15:32

@ mittyri
Posting: # 16730
Views: 5,488
 

 FDA group model in R

Dear mittyri,
Dear zizou,

» The ANOVA type I is the same as in PHX, but the PE and CIs are not. I'm stuck…

Thank you very much for your help!

I have checked the proposed model formula in R's lm() against the SAS output of an old study report with an unbalanced dataset.
I found a perfect match here for group*treatment!

1. Analysis started as FDA Group Model 1:
> anovadata <- lm(log(data2$Cmax)~group+seq+seq*group+subj%in%(seq)+prd%in%group+drug+drug*group, data=data2, na.action=na.exclude)
> anova(anovadata)
Analysis of Variance Table

Response: log(data2$Cmax)
           Df Sum Sq Mean Sq F value Pr(>F)   
group       1   0.12   0.120    1.23  0.272   
seq         1   0.05   0.046    0.47  0.497   
drug        1   0.18   0.182    1.87  0.177   
group:seq   1   0.46   0.460    4.71  0.034 * 
seq:subj   59  25.25   0.428    4.38  3e-08 ***
group:prd   2   0.33   0.164    1.68  0.195   
group:drug  1   0.01   0.007    0.07  0.797   
Residuals  59   5.76   0.098

> drop1(anovadata, test="F")
Single term deletions

Model:
log(data2$Cmax) ~ group + seq + seq * group + subj %in% (seq) +
    prd %in% group + drug + drug * group
           Df Sum of Sq   RSS  AIC F value Pr(>F)   
<none>                   5.76 -255                   
group:seq   0      0.00  5.76 -255                   
seq:subj   59     25.25 31.01 -161    4.38  3e-08 ***
group:prd   2      0.32  6.09 -252    1.64    0.2   
group:drug  1      0.01  5.77 -256    0.07    0.8             


F and P values for group:drug are exactly the same as obtained by SAS:

Param   Source        DF  F Value   P-value      Model to be used:
lCmax   Group*Treat   1   0.07   0.7966   N.S.   Remove interaction


2. I got the same F and P values (at least for some of the effects) when proceeding to FDA Group Model 2 (or the EMA's only model, with all effects fixed), not to mention exactly the same outputs for MSE / intra-subject CV and PE / 90% CIs:

> anovadata2 <- lm(log(data2$Cmax)~group+seq+seq*group+subj%in%(seq)+prd%in%group+drug, data=data2, na.action=na.exclude)
> drop1(anovadata2, test="F")
Single term deletions

Model:
log(data2$Cmax) ~ group + seq + seq * group + subj %in% (seq) +
    prd %in% group + drug
          Df Sum of Sq   RSS  AIC F value  Pr(>F)   
<none>                  5.77 -256                   
drug       1      0.21  5.98 -254    2.15    0.15   
group:seq  0      0.00  5.77 -256                   
seq:subj  59     25.25 31.02 -163    4.45 1.8e-08 ***
group:prd  2      0.33  6.10 -254    1.71    0.19


SAS output:
Param   Effect          DF   F Value   P-value
lCmax   Group           1    0.21      0.6475
lCmax   Seq             1    0.31      0.5822
lCmax   Group*Seq       1    1.08      0.3040
lCmax   Treat           1    2.15      0.1476
lCmax   Per(Group)      2    1.71      0.1896
lCmax   Subj(Group*Seq) 59   4.45      <.0001


Maybe lm() is sometimes doing exactly the same as PROC GLM?
Regards, VStus
ElMaestro
Hero

Denmark,
2016-10-17 18:58

@ VStus
Posting: # 16732
Views: 5,458
 

 FDA group model in R

Hi VStus,

» Maybe lm() sometimes doing exactly the same as PROC GLM?

I actually lost track of this discussion a bit and I am not exactly sure what the comparisons try to achieve?!?
However, lm() and PROC GLM are related but not the same, and they do not necessarily achieve exactly the same result. I am very sure that their behaviour corresponds to the specifications, and I have good reason to think their output is correct cf. those specifications. I do not know how SAS handles crosses and interactions, but I know that anova on lm() in R puts main effects first.
drop1 is a good example of a source of differences in terms of results. drop1 and type III are implemented differently in R and SAS. You can certainly argue both are right, even though the results are not identical. drop1 in R involves single-term deletions, which is how type III is defined, but that may give a zero SS (or SS decrease) for sequence. In SAS, type III involves deletion of more than just single terms for between-factors in our BE studies, so that sequence is not a zero-SS effect.
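The single-term-deletion point can be seen on a minimal fake 2×2 dataset (all names and numbers made up):

```r
# Minimal fake 2x2 crossover (4 subjects) showing why drop1() never
# offers Seq for deletion: the Subj %in% Seq term contains Seq, and
# drop1() respects marginality, so only Per, Trt and Seq:Subj are
# considered for single-term deletion.
set.seed(1)
d <- data.frame(Subj = factor(rep(1:4, each = 2)),
                Seq  = factor(rep(c("TR", "RT"), each = 4)),
                Per  = factor(rep(1:2, times = 4)),
                Trt  = factor(c("T","R","T","R","R","T","R","T")),
                Y    = rnorm(8))
m <- lm(Y ~ Seq + Per + Trt + Subj %in% Seq, data = d)
drop1(m, test = "F")        # no row for Seq at all
# SAS PROC GLM's type III instead constructs an estimable hypothesis
# for Seq, which is why the two outputs need not agree for
# between-subject factors.
```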

So, before you start comparing lm() (with or without anova or drop1) and SAS, perhaps you can formulate in your own words what you want calculated and how, when there is room for interpretation, as is the case with type I/III and lm()/drop1 versus PROC GLM. :-)

I could be wrong, but…


Best regards,
ElMaestro

- since June 2017 having an affair with the bootstrap.
VStus
Regular

Poland,
2016-10-20 13:57

@ ElMaestro
Posting: # 16745
Views: 5,276
 

 FDA group model in R

Dear Maestro,

» I actually lost a bit track of this discussion and I am not exactly sure what the comparisons try to achieve?!?

I'm sorry for the confusion. I accept that I may well have understood this topic completely wrongly. I just wanted to address mittyri's issue of dissimilar results between R's lm() and WinNonlin in the assessment of FDA group models 1 and 2 for an unbalanced dataset.

I don't have access to SAS or WinNonlin, but I have some study reports assessed via SAS using the FDA's group models 1 and 2. So, with regard to the point estimates, 90% CIs, F and P values for:
FDA Group Model 1:
- Group*Treat # Treat:Group in my R output
FDA Group Model 2:
- Treat # drug in my R output
- Per(Group) # group:prd in my R output
- Subj(Group*Seq) # seq:subj in R
lm() provides the same results as PROC GLM.

If the Russian authorities require an additional assessment using a group model, the R output should be OK. So should PHX, as long as the PE and 90% CI are not calculated using FDA Group Model 1.

Regards,
VStus
VStus
Regular

Poland,
2016-10-20 13:54

@ mittyri
Posting: # 16744
Views: 5,269
 

 Fixed effects model with Group term

Dear mittyri,

Getting a little back to possible nightmare for BE studies assessed in Russia:

» It could be a nightmare for Russia, where the experts insist on the Group term in the model, but all other things should be like in EMA Guideline with all factors as fixed (does someone realize how many studies will be failed? The experts suggest not to pool the groups in this case like the FDA Guidance states!)

Why do you proceed to the calculation of the point estimate and confidence intervals using FDA Group Model 1 (the one with the Treatment*Group interaction) if it does not seem to be intended for anything more than the assessment of the statistical significance of this particular effect?

I've just gone through some historical studies (with groups) done at well-established CROs before the EMA Q&A document regarding the statistical model for TSDs became available. They assessed the studies following the FDA recommendations. Model 1 was used only to calculate the significance of the Treatment*Group interaction and nothing more! No MSE, PE and 90% CI print-outs for this model. After confirmation that the effect was not significant, the data were processed under FDA Group Model 2 with the bioequivalence assessment.

Why not use the same strategy here? F and P seem to be equal between R and PHX, as well as between R and SAS PROC GLM.

PS: FDA Group Model 1 produces weird results with regard to the point estimate (if we get it directly from the ANOVA table), potentially making a good trial useless. In one of the examples the PE traveled from 96% to 88% (sic!) for data where the geometric means ratio was almost equal to the LSMeans ratio.

Regards, VStus
zizou
Junior

Plzeň, Czech Republic,
2016-12-31 02:54

@ ElMaestro
Posting: # 16917
Views: 3,935
 

 Group effects FDA/EMA

Hi everybody and nobody!

I am pointing to the 2x2 crossover studies in groups due to logistic reasons (capacity of clinical unit, etc.).

Many discussions about groups have taken place here, but recently I was asked to calculate the effect of groups in one failed study (a study acc. to EMA guidelines, in the protocol/report without group-effect testing; two groups within one week). (But no problem when "everything possible" is analysed in a failed study to get more information for the sponsor's next decision.)

Nevertheless, there is nothing stated by the EMA on the testing of such groups (or did I miss something?).
For the FDA there is CDER, or the well-known letter mentioned many times, dealing with group*treatment. So everyone can follow it to analyse the model for the FDA.
For the EMA, groups are apparently not intended to be tested?

My opinion is that the group effect is not important in a 2x2 crossover with proper planning/realization. (Moreover, I have never seen group testing for the EMA, and I like the sentence "With one week between groups I would never ever thought a milli­second of setting up a group model." from this post.)

The following model (from the post I reply to) sounds right, but does it only sound right, or is it right?
» » 2. What kind of statistical model is preffered for Europe ..
» » Something like that:
» » Fixed: sequence, period, treatment, group, subject(sequence)
»
» Sounds right to me, and I've had thumbs up in sc.advices.

Actually, groups and sequences are nested within subjects, so there is no difference in the BE results from the standard model without groups (a difference can only appear in the sequence effect and, of course, in the "new" effects Groups and Subjects(Groups*Sequences), i.e. in the between-subject variability as well).

Moreover, an optional statement such as "If groups differ significantly, they can't be pooled and BE must be calculated in one group only." smells fishy to me (I heard this bad idea somewhere; maybe a misunderstanding due to inaccuracy).

The group effect in such a model has one property similar to the period effect (besides the fact that it sometimes happens to be significant): neither the group effect nor the period effect is able to affect the BE decision (in this simple model).
You know, if you multiply the data from one period by 1000, nothing changes in the BE decision; only the period significance (+ intercept + total) is different. No change in Formulations, Subjects, Error. No problem unless the study is extremely imbalanced.
In the same way, the group effect in such a defined model is not able to affect the BE decision: if you multiply the data from one group by 1000, there is no change in the individual T/R ratios, nor in the ANOVA effects Formulations, Periods, Sequence and MSE. The difference is only in the ANOVA group effect (it will be significant) + Subjects (because group is nested within subjects).
I checked that property in the example below. (Equal group sizes and no dropouts are not required for this property, I think, even though in my testing example below this is the case. The multiplication by 1000 will always be applied to the same number of T and R values, even in case of dropouts or whatever; sure, imbalance would change the sequence effect, but not the BE decision, I expect.)
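This invariance can also be checked on simulated data (a sketch; all names and numbers made up):

```r
# Sketch: scaling one group's data by 1000 adds log(1000) to that
# group on the log scale; with Group (and Subjects nested in
# Sequence*Group) in the model, the shift is absorbed by those
# between-subject terms, so the treatment estimate and the MSE
# are untouched.
set.seed(42)
n <- 8                                    # subjects per group
d <- expand.grid(Per = factor(1:2), Subj = factor(1:(2 * n)))
d$Group <- factor(ifelse(as.integer(d$Subj) <= n, 1, 2))
d$Seq   <- factor(ifelse(as.integer(d$Subj) %% 2 == 0, "TR", "RT"))
d$Trt   <- factor(ifelse((d$Seq == "TR") == (d$Per == 1), "T", "R"))
d$Y     <- exp(rnorm(nrow(d)))            # fake PK metric

form <- log(Y) ~ Group + Seq + Per + Trt + Subj %in% (Seq * Group)
fit1 <- lm(form, data = d)
d2 <- d
d2$Y[d2$Group == 2] <- 1000 * d2$Y[d2$Group == 2]
fit2 <- lm(form, data = d2)

all.equal(coef(fit1)[["TrtT"]], coef(fit2)[["TrtT"]])  # TRUE
all.equal(sigma(fit1), sigma(fit2))                    # TRUE
```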


I prepared example (no real data ... to get "unlucky" results).
data=read.delim("http://tj-prazdroj.lesyco.cz/download/groups.txt")

First I calculated the classical GLM model without groups (according to the EMA).
Effects = Formulations, Periods, Sequences, Subjects(Sequences).

Results:
CVintra = 27.4%
PE (LL,UL) = 1.0127 (0.9236,1.1103)
Great results in 80-125%, I can congratulate myself! x)

No change after adding effect Groups to the model:
Effects = Formulations, Periods, Sequences, Groups, Subjects(Sequences*Groups).

Of course, everybody and nobody get the same results as above, because Sequences and Groups are nested within Subjects, i.e. only the effects Formulations, Periods, Subjects (as the sum of the between-subject effects) play a role for the intra-subject CV, the GMR of T/R, and the CI.

The effect Groups seems to be not significant as per the ANOVA table (p=0.1230537).
So it sounds good; everyone can write a statement that Groups are not significant and the data can be pooled (when using this simple model with Groups).

The phantom menace for this example is to use another model (according to the FDA).
Effects = Formulations, Periods(Groups), Sequences, Sequences*Groups, Subjects(Sequences*Groups), Groups, Formulations*Groups.
(GLM with random statement for Subjects was used)

Results:
CVintra = 25.7%
PE (LL,UL) = 1.0127 (0.9284,1.1045)

But with a significant Formulations*Groups interaction (p=0.008066805 < 0.1) there is the recommendation not to pool the data and to calculate BE separately in one of the groups (if this model was planned for the decision to pool or not to pool).
Because the data were written by me randomly, and then changed and changed again to get nice and not-nice results at the same time, the groups separately provide us with the following BE results:

Group 1:
data1=subset(data,data[,"Groups"]==1)
CVintra = 30.9%
PE (LL,UL) = 0.8774 (0.7554,1.0191)

Group 2:
data2=subset(data,data[,"Groups"]==2)
CVintra = 19.4%
PE (LL,UL) = 1.1689 (1.0626,1.2858)

I only want to make clearer and more visible what can be hidden behind "no significant group effect".

If group testing were required by the EMA, I would prefer a test for the group*treatment interaction. But I am still convinced that the group effect does not need to be tested in a standard 2x2 crossover study. To me it seems similar to having a 2x2 crossover study in 24 subjects in 1 group and testing for a Period*Treatment interaction (e.g., in the model Period*Treatment, Sequence, Subjects(Sequence)): if Period*Treatment is significant, use the data from one period as in a parallel design. It will certainly be significant if the T/R ratio (i.e. the ratio with means of different subjects) in period 1 is around 1.25 and T/R in period 2 is 0.80 (not probable to have the opposite (reciprocal) ratio, but periods are often more separated in time than groups, and who knows, after several months in case of a long washout).

Best regards,
zizou

Note: All calculations were done using GLM (in R and/or SPSS).
Helmut
Hero
Homepage
Vienna, Austria,
2017-06-03 14:57

@ zizou
Posting: # 17442
Views: 945
 

 Group effects FDA/EMA

Hi zizou,

sorry for reviving this old thread. See also this lengthy one.

» […] there is nothing stated from EMA on testing of such groups (or did I miss something?).

I don’t think so. I know only one case (biosimilar in 2016) where the statistician (“borrowed” from the MHRA) required the FDA’s model 2.

» I have opinion that group effect is not important in 2x2 crossover with proper planning/realization.

Agree – especially with this one.

» Moreover I have never seen group testing for EMA and I like the sentence "With one week between groups I would never ever thought a milli­second of setting up a group model." from this post.

Agree.

» I prepared example (no real data ... to get "unlucky" results).

THX! I could reproduce your results with my R-code. :-D

» Because the data were written by me randomly, and then changed and changed again to get nice and not-nice results at the same time, the groups separately provide us with the following BE results:
»
» Group 1:
» data1=subset(data,data[,"Groups"]==1)
» CVintra = 30.9%
» PE (LL,UL) = 0.8774 (0.7554,1.0191)
»
» Group 2:
» data2=subset(data,data[,"Groups"]==2)
» CVintra = 19.4%
» PE (LL,UL) = 1.1689 (1.0626,1.2858)

Nice example. My current thinking is: The decision scheme recommended by the FDA leads to nowhere. If there is a significant G×T interaction we are not allowed to pool. So far so good. If groups are not of the same size, everybody would present the results of the largest group evaluated by the conventional model. But: What if groups have equal sizes (like in your example)? Cherry-pick and present the “better” one? I bet the assessor would ask for the other one as well. If I were a regulator I would require that both pass or – if not – make a conservative decision: fail.
Furthermore, if there is a true G×T interaction the treatment effect might be biased. To which extent is unknown (cannot be estimated from the data). It might well be that a smaller group is closer to the true treatment effect than a larger one. We simply don’t know. IMHO, the assumption that “size matters” is false.
Your example is even more telling in another respect. Not only the PEs are different but also the variances. Should we blindly pool them? I know, the common model assumes equal variances anyhow, but…
I have just limited data (always tried to avoid equal group sizes). These are the results of my data sets which show a significant G×T interaction in model 1 and I evaluated the equally sized largest groups by model 3:

AUC
drug group  n df      mse CV(%)
 a     1    9  7 0.006026  7.77
 a     2    9  7 0.005799  7.63
 b     1   12 10 0.014851 12.23
 b     2   12 10 0.006714  8.21
 c     1    9  7 0.001881  4.34
 c     2    9  7 0.001462  3.82
Cmax
drug group  n df      mse CV(%)
 b     1   12 10 0.036411 19.26
 b     2   12 10 0.026247 16.31

Am I worried about different variances of groups (AUC, drug b)? Nope, p 0.6232. Drugs a & c (with extremely low CVs) were NTIDs and the PI wanted groups for safety reasons.

» If group testing was required by EMA I would prefer test for group*treatment interaction.

Disagree. What if you find one? Expected in 10% of studies by pure chance. Then you end up with the story from above.

» But I am still convinced that group effect is not required to test in standard 2x2 crossover study. For me it seems similar to have 2x2 crossover study on 24 subjects in 1 group and test for Period*Treatment interaction (e.g. in model Period*Treatment, Sequence, Subjects(Sequence) - if Period*Treatment significant, use data from one period as in parallel design.

Agree. Grizzle’s flawed testing for a sequence- (or better: unequal carryover-) effect… Cannot be handled statistically and only avoided by design.

My current thinking for the EMA (maybe I’m wrong):
  • Avoid a potential G×T interaction by design (i.e., comply with the FDA’s conditions for pooling, keep the interval between groups short, if possible use the staggered approach). State in the SAP that “it is a source of variation that cannot be reasonably assumed to have an effect on the response variable”. Pool the data and evaluate the study by the common 2×2 model.
  • Essentially the FDA is correct that the multigroup nature of the study should be taken into account – if (!) groups are separated by a long interval (months). If you want to go this way (remember: I have seen only a single case where the EMA asked for it), state it in the SAP. No pre-test of the G×T interaction by model 1! Pool the data and evaluate the study by the FDA’s model 2. The loss in power compared to the common 2×2 model is small.

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
hiren379
Regular

India,
2013-08-22 14:32

@ Helmut
Posting: # 11336
Views: 11,367
 

 2 Groups model FDA

Hi HS

» Followed by:
» If the Group-by-treatment interaction test is not statistically significant (p ≥0.1), only the Group-by-treatment term can be dropped from the model.
» … provided that the group meets minimum requirements for a complete bioequivalence study.

According to the above statement, what should be the minimum requirements of the group for a complete bioequivalence study?
Helmut
Hero
Homepage
Vienna, Austria,
2013-08-23 11:53

@ hiren379
Posting: # 11338
Views: 11,373
 

 Homework

Hi Hiren,

» » … provided that the group meets minimum requirements for a complete bioequivalence study.
»
» According to the above statement, what should be the minimum requirements of the group for a complete bioequivalence study?

Please do your homework first.

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes