Thanks Helmut

Hi Helmut!

Thank you for this great explanation and comparison! I didn't test Metida vs Phoenix, only vs SPSS and SAS. As far as I know, CSH and FA(2) are completely identical for data with two levels in the random model (in this case G has 2×2 dimensions). So I think the difference in the Phoenix results between CSH and FA(2) is an issue (unfortunately it can't be submitted on GitHub). Also, if we look at the SPSS/SAS documentation, we can find that rho is optimized within (−1, 1).
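To illustrate the two-level case, here is a small base-R sketch (the variances and correlation are made up for illustration, not taken from any data set): any positive-definite 2×2 CSH matrix can be factored as ΛΛᵀ, i.e., it is also representable in the factor-analytic parameterization, which is why the two structures coincide when G is 2×2.

```r
# Made-up variance components and correlation
s1 <- 0.30; s2 <- 0.45; rho <- 0.6

# CSH parameterization of a 2x2 G matrix
G.csh <- matrix(c(s1^2,          s1 * s2 * rho,
                  s1 * s2 * rho, s2^2), nrow = 2)

# Factor-analytic parameterization: G = Lambda %*% t(Lambda);
# a valid Lambda is recovered here via the Cholesky factor
Lambda <- t(chol(G.csh))
G.fa   <- Lambda %*% t(Lambda)

all.equal(G.csh, G.fa) # TRUE: both parameterizations give the same G
```

With more than two levels the two structures diverge, because CSH forces a single common correlation while FA does not.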

In this situation maybe a good choice is to never use Heterogeneous Compound Symmetry in Phoenix.

If we look at the point estimates and variance values for Metida and Phoenix with FA(0), they are identical. But there may be a minimal difference in the DF estimation. I observe differences for the DF in all software – between SPSS, SAS, and Phoenix – and sometimes it depends on the data (especially if the covariance matrix is ill-conditioned). Maybe the cause is different approaches to deriving the REML function, but that is only an assumption.

I can also say that spatial covariance structures were added to Metida, as well as an interface for any custom structure. Now I am working on bootstrapping and multiple imputation for Metida. Maybe some other packages are interesting:

* ODMXMLTools.jl - experimental package for ODM-XML

* MetidaNCA.jl - Julia NCA package

Hi Achievwin,

No idea (too lazy to search).

Not sure whether it will work. The partial replicate is a nasty beast (for potential problems see there).

IIRC, in the late 1990s concerns about the Subject-by-Formulation interaction emerged. Temporarily, studies had to be performed in a replicated design. After assessing the results, the FDA concluded that the S × F interaction is practically not relevant and the temporary requirement was lifted.

Furthermore, none of the simulations performed by various authors or by the FDA which led to RSABE contained an S × F interaction term in the model.

Thanks for the clarification

I vaguely recall that when you don't have a subject-by-formulation interaction in a replicate study design (a 4-period full replicate design?) you can request switchability (Metadate ANDA, I guess).

My question is: can we assess this subject-by-formulation interaction in a 3-period partial replicate design? What additional statistical test does one need for documenting this?

Hi Achievwin,

population BE (prescribability) and individual BE (switchability) are of historic interest only. Neither is assessed in any of the currently recommended approaches (average BE, reference-scaled BE, average BE with expanding limits).

Define ‘appropriate’. An appetizer.

Question for study design experts.

If we want to introduce an assessment or claim using a BE study, which design is more appropriate: a 3-way crossover partial replicate study or a 4-way crossover full replicate design? Please share your experience.

Sincerely,

Achievwin

Edit: Category changed; see also this post #1 --> 16205. [Helmut]]]>

Please help me to sort out the query on priority.

Your valuable response would be highly appreciated.

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]]]>

Hi Helmut,

I got it..!

Thanks for the interaction and time :ok:

Hi pharm07,

In your second script you forgot to state `theta0 = 0.95`. Hence, the default `theta0 = 0.975` was employed.

```r
library(PowerTOST)
balance <- function(n, n.seq) {
  # Round up to obtain balanced sequences
  return(as.integer(n.seq * (n %/% n.seq + as.logical(n %% n.seq))))
}
nadj <- function(n, do.rate, n.seq) {
  # Round up to compensate for the anticipated dropout-rate
  return(as.integer(balance(n / (1 - do.rate), n.seq)))
}
CV      <- c(0.045, 0.07) # First element CVwT, second CVwR
do.rate <- 0.30           # Anticipated dropout-rate 30%
n       <- sampleN.NTID(CV = CV, theta0 = 0.95, targetpower = 0.90,
                        print = FALSE, details = FALSE)[["Sample size"]]
dosed   <- nadj(n, do.rate, 2) # Adjust the sample size
df      <- data.frame(dosed = dosed, eligible = dosed:(n - 2))
for (j in 1:nrow(df)) {
  df$dropouts[j] <- sprintf("%.1f%%", 100 * (1 - df$eligible[j] / df$dosed[j]))
  df$power[j]    <- suppressMessages(
                      power.NTID(CV = CV, theta0 = 0.95, n = df$eligible[j]))
}
print(df, row.names = FALSE)
```

```
 dosed eligible dropouts   power
   146      146     0.0% 0.97046
   146      145     0.7% 0.96815
   146      144     1.4% 0.96887
   146      143     2.1% 0.96730
   146      142     2.7% 0.96592
   146      141     3.4% 0.96512
   146      140     4.1% 0.96470
   146      139     4.8% 0.96316
   146      138     5.5% 0.96322
   146      137     6.2% 0.96172
   146      136     6.8% 0.96048
   146      135     7.5% 0.95969
   146      134     8.2% 0.95892
   146      133     8.9% 0.95649
   146      132     9.6% 0.95497
   146      131    10.3% 0.95505
   146      130    11.0% 0.95347
   146      129    11.6% 0.95245
   146      128    12.3% 0.95101
   146      127    13.0% 0.94969
   146      126    13.7% 0.94851
   146      125    14.4% 0.94723
   146      124    15.1% 0.94594
   146      123    15.8% 0.94386
   146      122    16.4% 0.94258
   146      121    17.1% 0.94121
   146      120    17.8% 0.94039
   146      119    18.5% 0.93931
   146      118    19.2% 0.93649
   146      117    19.9% 0.93461
   146      116    20.5% 0.93393
   146      115    21.2% 0.93164
   146      114    21.9% 0.92946
   146      113    22.6% 0.92741
   146      112    23.3% 0.92519
   146      111    24.0% 0.92361
   146      110    24.7% 0.92132
   146      109    25.3% 0.91851
   146      108    26.0% 0.91725
   146      107    26.7% 0.91563
   146      106    27.4% 0.91393
   146      105    28.1% 0.91186
   146      104    28.8% 0.90848
   146      103    29.5% 0.90711
   146      102    30.1% 0.90354
   146      101    30.8% 0.90250
   146      100    31.5% 0.89851
```
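As a quick sanity check of the two rounding helpers, here is a pure base-R sketch, no PowerTOST required; the estimated sample size 102 and the 30% dropout-rate are the values from the example above:

```r
balance <- function(n, n.seq) {
  # Round up to obtain balanced sequences
  return(as.integer(n.seq * (n %/% n.seq + as.logical(n %% n.seq))))
}
nadj <- function(n, do.rate, n.seq) {
  # Round up to compensate for the anticipated dropout-rate
  return(as.integer(balance(n / (1 - do.rate), n.seq)))
}
balance(101, 2)    # 102: odd number rounded up to balanced sequences
nadj(102, 0.30, 2) # 146: dose 146 to expect >= 102 eligible subjects
```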

Hi Sereng,

For many years we have had a running gag in the forum, calling the boss*

* Who is only proficient in PowerPoint, copy-pasting from one document to another, and shouting *‘You are fired!’* if a study fails.

Hi Sereng,

Nope. Pasted from my previous post --> 22982:

```
Sample size (total)
  n   power
750 0.800246
```

`PowerTOST` always gives the total sample size. If you are interested in the background, see this article.

Many thanks!

Hi Helmut, pardon my ignorance, but I believe you are stating n = 750 per group in this parallel-group study (total n = 1,500) using the assumptions from the completed study? Correct? Thanks!

Hi ElMaestro, many thanks for the response. However, I missed the joke about "the guy in the penguin costume". I should have listened to my parents when they implored me to get a good liberal arts education!

Hi Helmut,

`theta1` and `theta2` – for the FDA keep the defaults of `0.8` and `1.25`: OK, noted.

OK, I referred to these examples.

OK, I have pasted the one example which I tried to work on.

`theta0`: OK.

Please see the example below.

Note: CV is not in scalar form.

```r
sampleN.NTID(CV = c(0.045, 0.07), theta0 = 0.95, targetpower = 0.9)
```

```
+++++++++++ FDA method for NTIDs ++++++++++++
           Sample size estimation
---------------------------------------------
Study design:  2x2x4 (TRTR|RTRT)
log-transformed data (multiplicative model)
1e+05 studies for each step simulated.

alpha  = 0.05, target power = 0.9
CVw(T) = 0.045, CVw(R) = 0.07
True ratio     = 0.95
ABE limits     = 0.8 ... 1.25
Implied scABEL = 0.9290 ... 1.0764
Regulatory settings: FDA
- Regulatory const. = 1.053605
- 'CVcap'           = 0.2142

Sample size search
  n   power
 92  0.875810
 94  0.882090
 96  0.888810
 98  0.894060
100  0.898510
102  0.903540
```

```r
# with 30% DO rate,
# as CV was specified as a vector
balance <- function(n, n.seq) {
  # Round up to obtain balanced sequences
  return(as.integer(n.seq * (n %/% n.seq + as.logical(n %% n.seq))))
}
nadj <- function(n, do.rate, n.seq) {
  # Round up to compensate for the anticipated dropout-rate
  return(as.integer(balance(n / (1 - do.rate), n.seq)))
}
CV      <- 0.045 # Assumed CV   # how to specify vector CV here?
do.rate <- 0.30  # Anticipated dropout-rate 30%
n       <- sampleN.NTID(CV = CV, print = FALSE, details = FALSE)[["Sample size"]]
dosed   <- nadj(n, do.rate, 2) # Adjust the sample size
df      <- data.frame(dosed = dosed, eligible = dosed:(n - 2))
for (j in 1:nrow(df)) {
  df$dropouts[j] <- sprintf("%.1f%%", 100 * (1 - df$eligible[j] / df$dosed[j]))
  df$power[j]    <- suppressMessages( # We know that some are unbalanced
                      power.NTID(CV = CV, n = df$eligible[j]))
}
print(df, row.names = FALSE)
```

```
 dosed eligible dropouts   power
    58       58     0.0% 0.92040
    58       57     1.7% 0.91575
    58       56     3.4% 0.91254
    58       55     5.2% 0.90722
    58       54     6.9% 0.90364
    58       53     8.6% 0.89761
    58       52    10.3% 0.89257
    58       51    12.1% 0.88767
    58       50    13.8% 0.88215
    58       49    15.5% 0.87672
    58       48    17.2% 0.86952
    58       47    19.0% 0.86376
    58       46    20.7% 0.85690
    58       45    22.4% 0.84922
    58       44    24.1% 0.84310
    58       43    25.9% 0.83486
    58       42    27.6% 0.82810
    58       41    29.3% 0.81915
    58       40    31.0% 0.81109
    58       39    32.8% 0.80105
    58       38    34.5% 0.79410
```

Kindly guide me with this example; I want to check whether I am making a mistake or not. :confused:

Hi Sereng,

Out of curiosity:

According to the FDA’s guidance (Section IV.B.1.d.):

For parallel designs, the confidence interval for the difference of means in the log scale can be computed using the total between-subject variance.^{1} […] equal variances should not be assumed.

(my emphasis)

Though you had equally sized groups, seemingly the variances were not equal. This calls for the Welch test with Satterthwaite’s approximation of the degrees of freedom:$$\nu\approx\frac{\left(\frac{{s_{1}}^{2}}{n_1}+\frac{{s_{2}}^{2}}{n_2}\right)^2}{\frac{{s_{1}}^{4}}{{n_{1}}^{2}(n_1-1)}+\frac{{s_{2}}^{4}}{{n_{2}}^{2}(n_2-1)}}$$ In R, SAS, SPSS, and other software packages it is the default.

- Using a pretest (*F*-test, Levene’s test, Bartlett’s test, Brown–Forsythe test) – as recommended in the past – is bad practice because it will inflate the Type I Error.^{2}

- If \({s_{1}}^{2}={s_{2}}^{2}\;\wedge\;n_1=n_2\), the formula given above reduces to the simple \(\nu=n_1+n_2-2\) anyhow.

- In all other cases the Welch test is conservative, which is a desirable property.

1. Misleading terminology. There is no ‘total between-subject variance’. In a parallel design only the *total* variance – which is *pooled* from the between- and within-subject variances – is accessible.

2. Zimmermann DW. *A note on preliminary tests of equality of variances*. Br J Math Stat Psychol. 2004; 57(1): 173–81. doi:10.1348/000711004849222.
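For readers who want to verify Satterthwaite’s approximation numerically, here is a base-R sketch; the variances and group sizes are made up for illustration:

```r
# Satterthwaite's approximate degrees of freedom for two groups,
# given the group variances s2.1, s2.2 and group sizes n1, n2
nu <- function(s2.1, s2.2, n1, n2) {
  (s2.1 / n1 + s2.2 / n2)^2 /
    (s2.1^2 / (n1^2 * (n1 - 1)) + s2.2^2 / (n2^2 * (n2 - 1)))
}

# Equal variances and equal group sizes: reduces to n1 + n2 - 2
nu(0.04, 0.04, 24, 24) # 46

# Unequal variances: fewer degrees of freedom, i.e., conservative
nu(0.04, 0.09, 24, 24) # < 46
```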

:clap:

Algebra rules.

Dear All

It would be great if someone could shed some light on dose linearity for hydrocortisone acetate. The information available in the literature is very confusing: some sources suggest it is linear for doses up to 40 mg, while others say it is nonlinear due to absorption and protein binding and shows a less-than-dose-proportional increase in AUC.

If we have to plan this study for the EU/UK, do we need to perform studies on both the higher and the lower strength?

Cheers

HG

Hi Vishal,

the forum is not a gentlemen’s club. Not interested in madams’ opinions?

See this post #3 --> 16205. Did you miss in the menu bar?

What about that one for