Yes, I am looking to demonstrate equivalence of the 3 different treatments to the reference.

Hi Ihababdallah,

See the subject line.

What do you want to demonstrate? Equivalence of the treatments to the reference and superiority of all to placebo, or what?

Hi Helmut,

I have a question related to this topic: if the study design is a 5×5 crossover trial (reference, 3 treatments, and placebo), how would I estimate the sample size?

Dear Marta!

Please find the recent FDA presentation “From General Q1/Q2 Inquiries to Supporting Complex Excipient Sameness”. It can be downloaded from https://sbiaevents.com/files2/Innov-2020-Day-1.zip (see D1S14-Kozak). It will be useful for the BCS-based biowaiver approach as well.

Regards,

Dshah

Hi ElMastro,

… it is recommended that a BE estimate for the new CI of the T/R ratio be submitted to meet the extended bounds.

Not quite. :-D

Just before the mentioned text it read:

… no confirmation is provided that variability in the reference drug exists and is not caused by emissions; please provide confirmation in the form of an emissions estimate.

We guessed that… in case of a value of 30% < CV_{RR} <45%, it is recommended to check the control of the patient's risk type I error at the level of 5%. If an alpha adjustment is necessary, it is recommended that a BE estimate for the new CI of the T/R ratio be submitted to meet the extended bounds.

That subjunctive is correct but not part of the everyday linguistic toolbox of those who do not speak a lot of English. My guess is the MHRA or the Irish Medicines Board, or it comes from someone who spent a lot of time taking English language classes :-)

Dear Helmut!

… in case of a value of 30% < CV_{RR} <45%, it is recommended to check the control of the patient's risk type I error at the level of 5%. If an alpha adjustment is necessary, it is recommended that a BE estimate for the new CI of the T/R ratio be submitted to meet the extended bounds.

I can only say: Wow! :clap:

Dear all,

We have known for a good while that under certain conditions the type I error might be inflated. However, seemingly European assessors were either not aware of it or ignored it. Last week I saw a deficiency letter (don’t ask for the country):

… in case of a value of 30% < CV_{RR} <45%, it is recommended to check the control of the patient's risk type I error at the level of 5%. If an alpha adjustment is necessary, it is recommended that a BE estimate for the new CI of the T/R ratio be submitted to meet the extended bounds.
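The mentioned inflation can be quantified with `PowerTOST`: simulate studies with the true ratio sitting exactly at the (expanded) acceptance limit and count how often they pass. A sketch; the CV, sample size, and design below are illustrative assumptions only:

```r
library(PowerTOST)
# Empiric type I error of ABEL: simulate at the upper expanded limit.
# CVwR = 0.35, 4-period full replicate, n = 34 (illustrative values).
CV     <- 0.35
limits <- scABEL(CV = CV)   # expanded acceptance limits for CVwR = 0.35
TIE    <- power.scABEL(alpha = 0.05, CV = CV, theta0 = limits[["upper"]],
                       n = 34, design = "2x2x4", nsims = 1e6)
TIE                         # noticeably above the nominal 0.05
# scABEL.ad() iterates an adjusted alpha until the TIE is controlled:
scABEL.ad(CV = CV, design = "2x2x4", n = 34)
```

The wider confidence interval resulting from the adjusted alpha is exactly what the deficiency letter is asking for.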

Hi Achievwin,

The CV and T/R-ratios are

I recommend the package `PowerTOST`. For the implemented designs see here and there. You can also run R scripts in the browser (see this post --> 21006). If you are dealing with a higher-order design, I recommend the “Two at a Time” approach instead of “All at Once” (pooled ANOVA); see the vignette. That means estimating the sample size of the study like for a 2×2×2 design.
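Applied to the 5×5 question above, each test-vs-reference comparison is therefore sized like a plain 2×2×2 crossover. A sketch; the CV and T/R-ratio below are assumptions for illustration only:

```r
library(PowerTOST)
# "Two at a Time": size each test-vs-reference comparison as its own
# 2x2x2 crossover. CV = 0.30 and theta0 = 0.95 are illustrative only.
sampleN.TOST(CV = 0.30, theta0 = 0.95, targetpower = 0.80,
             design = "2x2x2")  # prints the sample-size table (n = 40 here)
```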

In a parallel design you get only the total (pooled) CV.

For the intra- (and inter-) subject CV you need a crossover.

If you want sumfink in M$ Excel, consider FARTSSIE, which estimates the sample size based on the noncentral t-distribution.

```r
library(PowerTOST)
res <- data.frame(method = c("exact", "noncentral", "central"))
for (j in 1:nrow(res)) {
  res$n[j] <- sampleN.TOST(CV = 0.22, theta0 = 0.95, targetpower = 0.8,
                           method = res$method[j], details = FALSE,
                           print = FALSE)[["Sample size"]]
}
print(res, row.names = FALSE)
```

which gives

```
     method  n
      exact 22
 noncentral 22
    central 24
```
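For the curious, the noncentral-t method can be sketched in a few lines of base R (a rough sketch for a balanced 2×2×2 design, not FARTSSIE’s actual code; for serious work use `PowerTOST`):

```r
# TOST power via the noncentral t-distribution (approximation),
# balanced 2x2x2 crossover, log-scale analysis.
power.tost.nct <- function(alpha = 0.05, CV, theta0, n,
                           theta1 = 0.80, theta2 = 1.25) {
  s     <- sqrt(log(CV^2 + 1))  # intra-subject SD on the log scale
  df    <- n - 2                # 2x2x2 crossover
  se    <- s * sqrt(2 / n)      # SE of the T-R difference
  tcrit <- qt(1 - alpha, df)
  ncp1  <- (log(theta0) - log(theta1)) / se
  ncp2  <- (log(theta0) - log(theta2)) / se
  max(0, pt(-tcrit, df, ncp = ncp2) - pt(tcrit, df, ncp = ncp1))
}
power.tost.nct(CV = 0.22, theta0 = 0.95, n = 22)
# compare with the 22 subjects the noncentral method gave above
```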

Some SAS code based on the noncentral t-distribution …

Sure – if you know also the sample size and design. For the underlying algebra see this presentation (slides 26–30). Implemented in `PowerTOST`’s functions `CVfromCI()` / `CI2CV()`. See also the vignette. Example:

```r
library(PowerTOST)
signif(CVfromCI(lower = 0.9800, upper = 1.1257, design = "2x2x4",
                n = c(62, 63)), 4)
# [1] 0.497
```
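The underlying algebra can be illustrated as a base-R round trip for a balanced 2×2×2 design (a sketch with made-up values, not the package’s actual code):

```r
# Forward: CI from CV; back: CV from the CI alone (balanced 2x2x2).
n  <- 24; CV <- 0.25; pe <- 0.95; alpha <- 0.05
se <- sqrt(2 * log(CV^2 + 1) / n)   # SE of the T-R difference, log scale
ci <- exp(log(pe) + c(-1, 1) * qt(1 - alpha, n - 2) * se)
# back-calculation: half-width of the log-CI divided by the t-quantile
se.back <- diff(log(ci)) / (2 * qt(1 - alpha, n - 2))
CV.back <- sqrt(exp(se.back^2 * n / 2) - 1)
CV.back # recovers 0.25
```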

- Hauschke D, Steinijans VW, Diletti E, Burke M. *Sample Size Determination for Bioequivalence Assessment Using a Multiplicative Model.* J Pharmacokinet Biopharm. 1992; 20(5): 557–61. doi:10.1007/BF01061471.

- Jones B, Kenward MG. *Design and Analysis of Cross-Over Trials.* Boca Raton: Chapman & Hall/CRC Press; 3rd edition 2015.

Dear mittyri,

The methods implemented in the R package PK are based on methods published in peer reviewed journals such as Non-compartmental estimation of pharmacokinetic parameters in serial sampling designs or here Non-compartmental estimation of pharmacokinetic parameters for flexible sampling designs based on log-transformation of individual values to estimate lambda_z.

The rationale for log-transforming the individual values is that, based on this approach, the variance-covariance matrix can account for values used both for deriving AUC0–t and for the lambda_z used to extrapolate the AUC from t to infinity.

I am not aware of a publication which justifies calculation of lambda_z in case of sparse sampling based on means only as implemented in PHX.

In addition, attention should be paid to the handling of values below the limit of quantification (BLQ).

best regards & hope this helps

martin

PS.: I would like to use the opportunity to illustrate how important adequate handling of BLQ values is by using a theoretical example. Consider a serial sampling design (N=5 animals per time point) where all but one value is BLQ at the last time point, and think about estimation of t1/2. Ignoring the BLQ values at the last time point for 4 out of 5 animals will lead to an overestimated population t1/2, as the last time point is then driven by just a single animal. The same happens when you set BLQ values to zero: estimation of t1/2 requires log-transformation, and log of 0 is not defined, so this is equivalent to omitting those BLQ values.

Hello:

Could you share tools (R, SAS code, and Excel spreadsheets) for computing the sample size from the ISCV for parallel, 2×2, 3×3 and 4×4 BE study designs? Also, can we compute the ISCV or ANOVA CV from the confidence intervals? I truly appreciate your help.

Regards,

Achievwin.

Dear All,

Some time ago I discovered a difference between PHX and the 'PK' package in R regarding the lambda_z calculation in sparse-dataset analysis.

My question is: which one looks more reasonable to you?

- PHX takes the means of the concentrations and then log-transforms them to prepare for the regression analysis

- PK takes the logs and then calculates the means of those logs to prepare for the regression analysis

What do you think?
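The difference between the two orders of operations can be shown in a couple of lines of base R (made-up concentrations, purely for illustration):

```r
# Mean of logs = log of the geometric mean, which is always <= the
# log of the arithmetic mean whenever the values differ (AM-GM).
# Made-up concentrations of 5 animals at one time point:
conc <- c(12, 35, 50, 80, 150)
log(mean(conc))   # "PHX" order: mean first, then log-transform
mean(log(conc))   # "PK" order: log-transform first, then mean
# the latter is smaller; with a BLQ set to zero it is not even defined,
# which is why BLQ handling matters so much here
log(0)            # -Inf: a single zero destroys the log-then-mean approach
```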

Dear all,

As I mentioned previously, `install.packages("gWidgetsRGtk2.zip", repos = NULL)` worked. So to zip all …

Dear all bear users,

To make it quick: please do not request `gWidgets.zip` & `gWidgetsRGtk2.zip` from me any more (see this). I have already received tons of requests since these two packages (gWidgets & gWidgetsRGtk2) were orphaned/removed/retired/deprecated from the R repositories. I know some users were not trying to install bear; they only needed to install gWidgets & gWidgetsRGtk2 to run other packages. Google master pointed this Forum to them. Therefore, users can download preinst.r from SourceForge and run it.

Here is the long story. Since it was reported that gWidgets & gWidgetsRGtk2 had been removed from the R repositories (see this, this and this), the disaster began. I did a little stats about which R packages were affected/removed together with gWidgets & gWidgetsRGtk2. One of those packages, RQDA, was also included, and users tried to seek out a solution for it. I found that the only problem was that only gWidgetsRGtk2 cannot be built from its source tarball, and only on my Windows 10 (x64) only (three “only”s in this sentence).

Both gWidgets & gWidgetsRGtk2 can be built on macOS and a Linux PC from their sources. Even gWidgets can be built on Windows 10 (x64). The error message [in English] of gWidgetsRGtk2 on Windows 10 was:

```
Error in inDL(x, as.logical(local), as.logical(now), ...) :
  unable to load shared object
  'C:/.../library/RGtk2/libs/i386/RGtk2.dll':
  LoadLibrary failure: The specified module could not be found.
```

That means R tried to build both the i386 and x64 packages at the same time by default, even on an x64 platform/machine. Please note the dynamic lib. for i386 (.../library/RGtk2/libs/i386/…):

```
Error in inDL(x, as.logical(local), as.logical(now), ...) :
  unable to load shared object
  'R:/Apps/R/R-2.13.0/library/RGtk2/libs/i386/RGtk2.dll':
  LoadLibrary failure: Das angegebene Modul wurde nicht gefunden.
```

(The last line is German for “The specified module could not be found.”)

About this issue, Professor Ripley said:

`"...It is Microsoft's error message, not ours."`

Then I planned to email Mr. Gates. Before doing that, I found a nice post about the installation of an R package (tidyselect) on stackoverflow that totally inspired me. Therefore, the solution is as follows:

```r
this <- "https://cran.r-project.org/src/contrib/Archive/gWidgets/gWidgets_0.0-54.2.tar.gz"
install.packages(this, repos = NULL, INSTALL_opts = "--no-multiarch")
and.this <- "https://cran.r-project.org/src/contrib/Archive/gWidgetsRGtk2/gWidgetsRGtk2_0.0-86.1.tar.gz"
install.packages(and.this, repos = NULL, INSTALL_opts = "--no-multiarch")
```

And it finally works. The key option is `INSTALL_opts = "--no-multiarch"`.

[to be continued...]

Dear all,

Alfredo García-Arieta notified me that in v1.2 there is no need any more for the workaround to get a fixed seed. Don’t download from the top-folder at SourceForge; that’s still v1.1 of October 2018. Instead:

- navigate to Files

- click on … and then

- download bundle_bootf2BCA_v1.2.zip of May 2019.

At the end of `report.txt` the seed is given for reproducibility.

Hi Pharma88,

in order to avoid surprises I recommend to perform a sensitivity analysis (implemented in `PowerTOST`). In order to assess the impact of deviations from assumptions on power, try this:

```r
library(PowerTOST)
CV     <- 0.25  # assumed CV
theta0 <- 0.95  # assumed T/R-ratio
target <- 0.80  # target (desired) power
design <- "2x2" # any one given in known.designs()
# default BE limits: theta1 = 0.80, theta2 = 1.25
x <- pa.ABE(CV = CV, theta0 = theta0,
            targetpower = target, design = design)
plot(x, pct = FALSE, ratiolabel = "theta0")
```

However, this is not the end of the story, since potential deviations occur simultaneously. That’s a four-dimensional problem (power depends on theta0, CV, and n). A quick & dirty R-script is given at the end.

The lower right quadrants of each panel show “nice” combinations (T/R-ratio > assumed and CV < assumed). Higher power than desired, great.

The other combinations are tricky. Since power is most sensitive to the T/R-ratio, it would need a substantially lower CV to compensate for a worse T/R-ratio. Have a look at the 0.80 contour lines in the lower left quadrant of the first panel (no dropouts). Say, the T/R-ratio is just 0.92. Then with any CV > 0.2069 power will be below our target.

On the other hand, “better” T/R-ratios allow for higher CVs. That’s shown in the upper right quadrants. However, if the CV gets too large, even a T/R-ratio of 1 does not give the target power.

In the upper left quadrants are the worst case combinations (T/R-ratio < assumed and CV > assumed). It might still be possible to show BE though with a lower chance (power < 0.80).

Like in the Power Analysis above we see that dropouts don’t hurt that much.

Note that – since power curves are symmetrical in log-scale – you get the same power for \(\small{\theta_0}\) and \(\small{1/\theta_0}\).
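A quick check of that symmetry (illustrative values):

```r
library(PowerTOST)
# Power is symmetric in log-scale: theta0 and 1/theta0 give the same
# power as long as the BE limits are reciprocal (0.80 ... 1.25).
p1 <- suppressMessages(power.TOST(CV = 0.25, theta0 = 0.95,     n = 28))
p2 <- suppressMessages(power.TOST(CV = 0.25, theta0 = 1 / 0.95, n = 28))
all.equal(p1, p2) # TRUE
```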

With `sensitivity(CV = CV, do.rate = do.rate, theta0.lo = 0.8)` …

But again, this should be done before the study.

If you demonstrated BE with a

There is simple intuition behind results like these: If my car made it to the top of the hill, then it is powerful enough to climb that hill; if it didn’t, then it obviously isn’t powerful enough. Retrospective power is an obvious answer to a rather uninteresting question. A more meaningful question is to ask whether the car is powerful enough to climb a particular hill never climbed before; or whether a different car can climb that new hill. Such questions are prospective, not retrospective.

— Russell V. Lenth, Two Sample-Size Practices that I Don’t Recommend.

```r
library(PowerTOST)

sensitivity <- function(alpha = 0.05, CV, CV.lo, CV.hi, theta0 = 0.95,
                        theta0.lo, do.rate, target = 0.8, design = "2x2",
                        theta1, theta2, mesh = 25) {
  # alpha = 0.5 for assessing only the PE (Health Canada: Cmax)
  if (alpha <= 0 | alpha > 0.5)
    stop("alpha ", alpha, " does not make sense.")
  if (missing(CV))
    stop("CV must be given.")
  if (missing(CV.lo))
    CV.lo <- CV * 0.8
  if (missing(CV.hi))
    CV.hi <- CV * 1.2
  if (theta0 >= 1)
    stop("theta0 >= 1 not implemented.")
  if (missing(theta1) & missing(theta2))
    theta1 <- 0.8
  if (!missing(theta1) & missing(theta2))
    theta2 <- 1 / theta1
  if (missing(theta1) & !missing(theta2))
    theta1 <- 1 / theta2
  if (missing(theta0.lo))
    theta0.lo <- theta0 * 0.95
  if (theta0.lo < theta1) {
    message("theta0.lo ", theta0.lo, " < theta1 does not make sense. ",
            "Changed to theta1.")
    theta0.lo <- theta1
  }
  if (missing(do.rate))
    stop("do.rate must be given.")
  if (do.rate < 0)
    stop("do.rate ", do.rate, " does not make sense.")
  if (target <= 0.5)
    stop("Target ", target, " does not make sense. Toss a coin instead.")
  if (target >= 1)
    stop("Target ", target, " does not make sense.")
  d.no <- PowerTOST:::.design.no(design)
  if (is.na(d.no))
    stop("design '", design, "' unknown.")
  if (mesh <= 10) {
    message("Too wide mesh is imprecise. Increased to 25.")
    mesh <- 25
  }
  CVs     <- seq(CV.lo, CV.hi, length.out = mesh)
  theta0s <- seq(theta0.lo, 1, length.out = mesh)
  # Sample size based on assumptions
  n  <- sampleN.TOST(alpha = alpha, CV = CV, theta0 = theta0,
                     theta1 = theta1, theta2 = theta2,
                     targetpower = target, design = design,
                     details = FALSE)[["Sample size"]]
  ns <- n:floor(n * (1 - do.rate))
  windows(width = 6.5, height = 6.5)
  fig.col <- ceiling(sqrt(length(ns)))
  fig.row <- ceiling(length(ns) / fig.col)
  figs    <- c(fig.col, fig.row)
  op      <- par(no.readonly = TRUE)
  par(mar = c(3.5, 4, 0.2, 0.3))
  split.screen(figs)
  for (j in seq_along(ns)) {
    # THX to Benno Pütz for the next line!
    power <- outer(theta0s, CVs, function(x, y)
                     suppressMessages(power.TOST(CV = y, theta0 = x,
                                                 alpha = alpha,
                                                 design = design,
                                                 n = ns[j])))
    pwr   <- suppressMessages(power.TOST(CV = CV, theta0 = theta0,
                                         alpha = alpha,
                                         design = design,
                                         n = ns[j]))
    screen(j)
    plot(c(theta0.lo, 1), c(CV.lo, CV.hi), type = "n", las = 1,
         xlab = "", ylab = "", cex.axis = 0.9)
    axis(1, at = theta0, labels = FALSE)
    axis(2, at = CV, labels = FALSE)
    if (j %% figs[2] == 1) {        # y-label in first column
      mtext("CV", side = 2, line = 3)
    }
    if (j > prod(figs) - figs[2]) { # x-label in last row
      mtext(expression(theta[0]), side = 1, line = 2.25)
    }
    grid(); box()
    nl  <- length(pretty(power, 20))
    clr <- sapply(hcl.pals(type = "sequential"),
                  hcl.colors, n = nl, rev = TRUE)[, "ag_Sunset"]
    contour(theta0s, CVs, power, col = clr, nlevels = nl, labcex = 0.8,
            labels = sprintf("%.2f", pretty(power, 20)), add = TRUE)
    points(theta0, CV, cex = 1, pch = 21, col = "blue", bg = "#87CEFA")
    TeachingDemos::shadowtext(theta0, CV, col = "blue", bg = "white",
                              labels = paste0(signif(pwr, 3),
                                              " (n ", ns[j], ")"),
                              r = 0.25, adj = c(0.5, 1.6), cex = 0.8)
  }
  close.screen(all = TRUE)
  par(op)
}

#################################################
# Specification of the study (mandatory values) #
#################################################
CV      <- 0.25 # assumed CV
do.rate <- 0.10 # anticipated dropout-rate (10%)
#################################################
# defaults (if not provided in named arguments) #
#   alpha     = 0.05        common              #
#   CV.lo     = CV*0.80     best case           #
#   CV.hi     = CV*1.20     worst case          #
#   theta0    = 0.95        assumed T/R-ratio   #
#   theta0.lo = theta0*0.95 worst case          #
#   target    = 0.80        target power        #
#   theta1    = 0.80        lower BE limit      #
#   theta2    = 1.25        upper BE limit      #
#   design    = "2x2"       in known.designs()  #
#   mesh      = 25          resolution          #
#################################################
sensitivity(CV = CV, do.rate = do.rate)
```

Hi researcher101,

Shit happens :cool:

Yes, the only reason this question comes up is that sponsors have not begun to adhere to the policy of pharmacokinetic solidarity :-D

Here's how I would approach it:

1. Act according to the protocol.

2. If nothing is stated in the protocol about this situation, act in accordance with SOPs.

3. If nothing is stated in the SOPs about this situation, let the PI (and none other than the PI!) judge the case, decide, and document the basis for her/his decision.

Judging the case, for example, means the PI (uncoerced) should decide whether inclusion of data arising from the subject in question helps fulfill the purpose of the trial.

Oftentimes, if you are doing BE with two active treatments with just two periods, then exclusion of one period for a subject means that subject is entirely lost for the purposes of stats.

You did not mention where you are submitting the dossier, but national guidelines may apply as well.

Dear Researcher!

It is better to exclude the subject from the study.

For the XR product, your PI should confirm the exclusion of the participant on clinical grounds.

Regards,

DShah

Dear All, I'm asking: if I have a study on a product containing two actives (one IR and the other extended release) and a participant had diarrhea after the Tmax of the IR active and through the Tmax of the XR active, should I exclude the participant from the study? Or include him in the statistical calculation for the XR active and exclude him for the IR? :confused:

Hi Pharma88,

See this post --> 21782 and scroll down to the formula.

Nothing useful.

`PowerTOST`. You planned the study based on …

In other words, if you plan studies for 80% power, one out of five will fail by chance.

Simple example in R:

```r
library(PowerTOST)
set.seed(123456)
CV      <- 0.25 # assumed CV
theta0  <- 0.95 # assumed T/R-ratio
do.rate <- 0.10 # anticipated dropout rate (10%)
# defaults: targetpower = 0.80, design = "2x2"
studies <- 20
N   <- sampleN.TOST(CV = CV, theta0 = theta0, details = FALSE,
                    print = FALSE)[["Sample size"]]
res <- data.frame(study = c("as planned", 1:studies),
                  CV = c(CV, rnorm(mean = CV, n = studies, sd = 0.05)),
                  theta0 = c(theta0, rnorm(mean = theta0, n = studies, sd = 0.05)),
                  n = c(N, round(runif(n = studies, min = N * (1 - do.rate), max = N))),
                  CL.lower = NA, CL.upper = NA, BE = "fail", assessment = NA,
                  power = NA)
for (j in 1:nrow(res)) {
  res[j, 5:6] <- round(100 * CI.BE(CV = res$CV[j], pe = res$theta0[j],
                                   n = res$n[j]), 2)
  if (res$CL.lower[j] >= 80 & res$CL.upper[j] <= 125) {
    res$BE[j]         <- "pass"
    res$assessment[j] <- "equivalent"
  } else {
    if (res$CL.lower[j] > 125 | res$CL.upper[j] < 80) {
      res$assessment[j] <- "inequivalent"
    } else {
      res$assessment[j] <- "indecisive"
    }
  }
  res$power[j] <- suppressMessages(
                    power.TOST(CV = res$CV[j], theta0 = res$theta0[j],
                               n = res$n[j]))
}
res[, c(2:3, 9)] <- signif(res[, c(2:3, 9)], 4)
txt <- paste(sprintf("%1.0f%%", 100 * length(which(res$BE == "fail")) / studies),
             "of actual studies failed\n")
cat(txt); print(res, row.names = FALSE)
```

```
25% of actual studies failed

      study     CV theta0  n CL.lower CL.upper   BE assessment  power
 as planned 0.2500 0.9500 28    84.91   106.28 pass equivalent 0.8074
          1 0.2917 0.9123 26    79.66   104.48 fail indecisive 0.4729
          2 0.2362 1.0130 25    90.47   113.39 pass equivalent 0.8923
          3 0.2322 0.9519 28    85.75   105.68 pass equivalent 0.8647
          4 0.2544 0.9595 28    85.60   107.55 pass equivalent 0.8274
          5 0.3626 0.9731 27    82.64   114.59 pass equivalent 0.4513
          6 0.2917 0.9286 26    81.09   106.35 pass equivalent 0.5496
          7 0.3156 0.9508 26    82.15   110.05 pass equivalent 0.5534
          8 0.3751 0.9852 27    83.23   116.63 pass equivalent 0.4154
          9 0.3084 0.9986 27    86.80   114.88 pass equivalent 0.6819
         10 0.2287 0.9190 26    82.56   102.29 pass equivalent 0.6927
         11 0.2002 0.9072 26    82.58    99.67 pass equivalent 0.7181
         12 0.1943 0.9535 26    87.02   104.47 pass equivalent 0.9386
         13 0.2472 0.8977 26    79.97   100.77 fail indecisive 0.5040
         14 0.3087 0.8126 26    70.42    93.76 fail indecisive 0.0712
         15 0.3027 0.8935 26    77.64   102.83 fail indecisive 0.3582
         16 0.2529 0.9069 25    80.38   102.33 pass equivalent 0.5300
         17 0.2132 1.0280 28    93.38   113.17 pass equivalent 0.9547
         18 0.2965 1.0010 27    87.44   114.54 pass equivalent 0.7285
         19 0.3334 1.0020 26    85.91   116.91 pass equivalent 0.5542
         20 0.2780 0.8942 26    78.56   101.78 fail indecisive 0.4108
```

When you increase the number of simulated studies you will sooner or later end up with 20% failing.