Dear All

As per the new rules (GSR 227(E) dated 19 March 2019) released by the CDSCO, can an independent ethics committee give approval for clinical-trial studies :confused: for sites that do not have their own ethics committee within the city or a 50 km radius?

What should be the criteria for the interval of meetings in this case, or is it as per the SOP?

Your views will be highly appreciated.

Regards

Bharat

Edit: See also this post #3 --> 16205. Official Gazette (English text about EC starts here). [Helmut]]]>

Hi sschivu,

See this post --> 17774. The current version is 1.0.5.9000 (2017-11-25).

You can also click Clone or download ▼ Download ZIP and then install in

Though we validated the code with reference data sets against SAS and Phoenix/WinNonlin, it is still a work in progress. Use in a production environment is at your own risk.]]>

Hi Helmut,

I request you to send the partial-replicate code to my personal email ID.

Regards,

sschivu

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [mittyri]]]>

Dear Osama,

Here I can only totally agree with you, especially with the last statement concerning the BEBA Forum.

Regarding the beginning of your statement, I'm not sure whether regulatory persons fulfill the requirements you state very often. But that's another story…]]>

Hi Helmut,

To perform a true sensitivity analysis, did you use not only the TRUE/FALSE randomization lists but also other possible random sequences? ;-)]]>

Hi Libaiyi,

may I ask whether you did a root cause analysis and, if you did, whether you were able to assign a root cause to this case?

The RC determines the CAPA. Without a really good RC, finding a suitable CAPA could be quite impossible. If nothing is done, it could happen again, so obviously something's got to be changed here.]]>

Hi libaiyi,

Is this related to this post --> 19600 of yours? That’s not a statistical issue but a serious violation of GCP. Of course, you could use the actual sequences in a sensitivity analysis. Even if both the per-protocol analysis and the sensitivity analysis pass, it would raise serious doubts about the procedures of the CRO. There were too many cases in the past (see this post --> 10704) which would – rightly – set any assessor’s alarm bells off and likely trigger an inspection. Get prepared.
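Such a sensitivity analysis can be sketched in R with a toy data set (the column names, the two mixed-up subjects, and the simulated effects below are illustrative assumptions, not from any real study; in SAS one would analogously replace the planned-sequence variable by the actual one in the model statement):

```r
# toy 2x2 crossover with an assumed dosing mix-up in two subjects
set.seed(123)
dta <- data.frame(subject = factor(rep(1:12, each = 2)),
                  period  = factor(rep(1:2, 12)))
dta$seq.planned <- ifelse(rep(1:12, each = 2) <= 6, "TR", "RT")
dta$seq.actual  <- dta$seq.planned
# assume subjects 1 and 7 were dosed against the plan: swap their sequences
swap <- dta$subject %in% c("1", "7")
dta$seq.actual[swap] <- ifelse(dta$seq.planned[swap] == "TR", "RT", "TR")
# the administered treatment follows the ACTUAL sequence
dta$trt <- ifelse((dta$seq.actual == "TR") == (dta$period == "1"), "T", "R")
# simulate log-transformed PK responses (arbitrary effect sizes)
dta$logpk <- 4 + 0.05 * (dta$trt == "T") + rnorm(24, sd = 0.2)
# sensitivity analysis: fixed-effects model with the actual sequence
m  <- lm(logpk ~ seq.actual + subject + period + trt, data = dta)
pe <- exp(coef(m)["trtT"]) # point estimate of the T/R ratio
```

The per-protocol analysis would use `seq.planned` instead; comparing the two point estimates shows how much the mix-up matters.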

I once faced a case where the randomization was not followed. The CRO ignored the randomization provided by the sponsor (well,

I was also responsible for evaluating the safety part. When I got the CRFs from the CRO, I realized that the drug administration didn’t match the randomization I had. F**k! Sensitivity analysis with the “true” randomization as an amendment to the statistical report. Study passed.

However, after some legal to and fro the CRO repeated the study at its own cost and never was contracted by the sponsor again.]]>

Dear All,

We know that the model statement in SAS for the mixed-effects model to evaluate bioequivalence is

`model logpkp =trtseqan aperiod trtan / ...`

in which the planned sequence is used. In a 2×2 BE study, if subject A with the planned sequence T-R is actually assigned to R-T, and subject B with the planned R-T has the actual sequence T-R, what code can we use for the model statement with these two subjects included in a sensitivity analysis?

Thanks in advance!

Edit: Category changed; see also this post #1 --> 16205. [Helmut]]]>

For an oral solid dosage form [tablet and suspension], the FDA dissolution database recommends performing dissolution at 25 °C, whereas the product-specific FDA guidance for the same product mentions 37 °C.

Which temperature should I adopt and follow for generic development?]]>

Hi Helmut,

Thank you for this logical explanation. I think that as a regulatory person you must have a very vast academic and scientific background in different fields, besides checking this forum every morning – otherwise you would lose.

Have a nice day,

Osama]]>

Hi mittyri,

Welcome to the club! BTW, THX for implementing the lin-up/log-down trapezoidal. :-D

Yep. Actually this story reaches too far (for an IR formulation crossing flip-flop PK; no regulator would buy that regardless of what is written in a guideline) and not far enough: Setting the cut-off for pAUC at the individual tmax

I think that limiting ka

For the cut-off 2×tmax

Edit: No wonder you hated this line of your code. Shouldn’t it be:

```
SubjectsDFstack <-
  reshape(SubjectsDF[, -c(2,3,4,6,7,9,11)],
          direction = 'long', varying = c(3:5), v.names = "ratio",
          timevar = "metric", times = names(SubjectsDF)[c(7,9,11)])
```

My sim’s: in the input section

`t.cut <- 2*log(ka/(CL/Vd))/(ka - CL/Vd)`
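As a quick numerical check (using the example parameters from the simulation), this cut-off is exactly twice the reference tmax of the one-compartment model:

```r
# verify that t.cut equals 2 * tmax for the example parameters
ka <- 1.39; CL <- 0.347; Vd <- 1
ke    <- CL/Vd
tmax  <- log(ka/ke)/(ka - ke)           # tmax of the reference
t.cut <- 2*log(ka/(CL/Vd))/(ka - CL/Vd)
isTRUE(all.equal(t.cut, 2*tmax))        # TRUE
```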

Then (relevant lines only):

```
AbsorptionDF <- function(D, ka, Vd, CL, t, ratio, t.cut) {
  # Reference
  ke   <- CL/Vd
  C    <- C.sd(D=D, Vd=Vd, ka=ka, ke=ke, t=t)
  tmax <- t[C == max(C)][1]
  Cmax <- C.sd(D=D, Vd=Vd, ka=ka, ke=ke, t=tmax)
  AUC.t <- AUCcalc(t, C)
  t.1   <- t[which(t <= t.cut)]
  C.1   <- C[which(t <= t.cut)]
  pAUC  <- AUCcalc(t.1, C.1)
  Cmax.AUC <- Cmax/AUC.t
```

```
DF.sub <- cbind(Subject = isub, V = Vd.sub, CL = CL.sub,
                AbsorptionDF(D, ka.sub, Vd.sub, CL.sub, t, ratio, t.cut))
```

```
sp1 <- ggplot(SubjectsDFstack[SubjectsDFstack$metric == "Cmax", ],
              aes(x=kaT_kaR, y=ratio, color=factor(metric)))
sp1 + theme_bw() +
  geom_point(size=.3) +
  geom_smooth(method = 'loess', se = FALSE) +
  stat_density_2d(data = SubjectsDFstack[SubjectsDFstack$metric == "Cmax", ],
                  geom = "raster", aes(alpha = ..density..), fill = "#F8766D",
                  contour = FALSE) +
  scale_alpha(range = c(0, 0.7)) +
  scale_x_continuous(trans='log2') +
  scale_y_continuous(limits=c(0.5,2), trans='log2')

sp2 <- ggplot(SubjectsDFstack[SubjectsDFstack$metric == "pAUC", ],
              aes(x=kaT_kaR, y=ratio, color=factor(metric)))
sp2 + theme_bw() +
  geom_point(size=.3) +
  geom_smooth(method = 'loess', se = FALSE) +
  stat_density_2d(data = SubjectsDFstack[SubjectsDFstack$metric == "pAUC", ],
                  geom = "raster", aes(alpha = ..density..), fill = "#6DAAF8",
                  contour = FALSE) +
  scale_alpha(range = c(0, 0.7)) +
  scale_x_continuous(trans='log2') +
  scale_y_continuous(limits=c(0.5,2), trans='log2')

sp3 <- ggplot(SubjectsDFstack[SubjectsDFstack$metric == "Cmax_AUC", ],
              aes(x=kaT_kaR, y=ratio, color=factor(metric)))
sp3 + theme_bw() +
  geom_point(size=.3) +
  geom_smooth(method = 'loess', se = FALSE) +
  stat_density_2d(data = SubjectsDFstack[SubjectsDFstack$metric == "Cmax_AUC", ],
                  geom = "raster", aes(alpha = ..density..), fill = "#6DF876",
                  contour = FALSE) +
  scale_alpha(range = c(0, 0.7)) +
  scale_x_continuous(trans='log2') +
  scale_y_continuous(limits=c(0.5,2), trans='log2')
```

Based on loess:

```
  metric kaT_kaR predicted sensitivity
Cmax_AUC     0.5 0.8082842  0.54181715
    Cmax     0.5 0.8075275  0.54298163
    pAUC     0.5 0.7143879  0.76337666
    pAUC     2.0 1.2181904  0.08875421
Cmax_AUC     2.0 1.2012353  0.10671423
    Cmax     2.0 1.2028693  0.10745301
```

Again, pAUC is the one-eyed leading the blind ones – but only if the ka of T is lower than that of R.

Hi Helmut,

thank you so much for the comprehensive answer!

Hi mittyri,

This was the standard approach till the mid 1990s.

The idea behind it was clinical relevance. An example often discussed at that time was amitriptyline (t

Tucker even argued that in a linear system the compound with the lowest variability (parent, active or inactive metabolite) could be chosen. Hence, for a while he was called Geoff

Already at the Bio-International in 1994 the pendulum started to swing towards the approach we are now bound to.

BTW, the proceedings of the Bio-International conferences still make a great read and help in understanding

Maybe you can get the first ones used. If you find the third one, covering the Bio-International 1996 in Yokohama, somewhere, let me know. I lost mine…

I corrected a typo in your original post from

```
SubjectsDFstack <-
  reshape(SubjectsDF[, -c(2,3,4,6,7,9,11)],
          direction = 'long', varying = 3:5, v.names = "ratio",
          timevar = "metric", times = names(SubjectsDF1)[3:5])
```

to

```
SubjectsDFstack <-
  reshape(SubjectsDF[, -c(2,3,4,6,7,9,11)],
          direction = 'long', varying = 3:5, v.names = "ratio",
          timevar = "metric", times = names(SubjectsDF)[3:5])
```

So what do you conclude?

*Importance of Metabolites in Assessment of Bioequivalence.* In: Midha KK, Blume HH, editors. *Bio-International. Bioavailability, Bioequivalence and Pharmacokinetics.* Stuttgart: medpharm; 1993. p. 147–208.

- Blume HH, Midha KK.
*Bio-International ’92, Conference on Bioavailability, Bioequivalence and Pharmacokinetic Studies.* Pharm Res. 1993;10(12):1806–11. doi:10.1023/A:1018998803920.

- Tucker GT.
*Bioequivalence – A Measure of Therapeutic Equivalence?* In: Blume HH, Midha KK, editors. *Bio-International 2. Bioavailability, Bioequivalence and Pharmacokinetic Studies.* Stuttgart: medpharm; 1995. p. 35–43.

- Welling PG.
*Bioequivalence – A Measure of Quality Control?* In: Blume HH, Midha KK, editors. *Bio-International 2. Bioavailability, Bioequivalence and Pharmacokinetic Studies.* Stuttgart: medpharm; 1995. p. 45–49.

Hi Helmut,

simple linear regression, considering only points from, let's say, 15% of Cmax up to the time point before Cmax (i.e. excluding Cmax from the regression). One would need real-life data from 100 different compounds for a start…]]>

Hi Sury,

first of all, please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! We edited 80% of your replies. Hence, this is the first warning.

No, you got it wrong. You can assume something but

Are you kidding? You will administer a drug to healthy volunteers or patients which always carries some risk. Don’t play games.

Almost everywhere.

Sorry, you got me wrong. The Forum’s Policy states “We expect a

I strongly recommend reading textbooks on the topic. See this post --> 615 (always get the latest editions). #1, #3, and #11 are good entry points. For sample-size estimation, additionally:

- Julious SA.
*Sample Sizes for Clinical Trials.* Boca Raton: Chapman & Hall/CRC; 2010.

Hello

What I mean to say is that in normal bioequivalence studies we need the ISCV in order to estimate the sample size. Is the same criterion applicable to the standard deviation for non-inferiority trials too? But I got my answer in the above explanation… :-)

In addition to that, I have one more doubt…

By the above explanation, don't we need any sort of information regarding the drug's nature, literature support, or pilot studies for the estimation of the sample size?

In general BE studies, we require a pilot study or literature support for the sample-size estimation (ISCV) of the pivotal studies,

whereas for non-inferiority trials we are assuming the margin of error, the standard deviation, and the power (which are required).

Correct me if I am wrong anywhere.

BTW, thanks for your reply. It clarified the concept of non-inferiority trials for me. :-)

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]]]>

Hi Helmut,

Could you please explain a little bit? When did I miss those good old times?

ready for simulation:

```
library(ggplot2)

# input parameters
Nsub <- 1000     # number of subjects to simulate
D    <- 400
ka   <- 1.39     # 1/h
ka.omega <- 0.1
Vd   <- 1        # L
Vd.omega <- 0.2
CL   <- 0.347    # L/h
CL.omega <- 0.15
t     <- c(seq(0, 1, 0.25), seq(2, 6, 1), 8, 10, 12, 16, 24) # some realistic sequence
ratio <- 2^(seq(-3, 3, 0.2)) # ratios of ka(T)/ka(R)
```

```
# helper functions
C.sd <- function(F=1, D, Vd, ka, ke, t) {
  if (!identical(ka, ke)) { # common case: ka != ke
    C <- F*D/Vd*(ka/(ka - ke))*(exp(-ke*t) - exp(-ka*t))
  } else {                  # equal input & output rates
    C <- F*D/Vd*ke*t*exp(-ke*t)
  }
  return(C)
}

AUCcalc <- function(t, C) {
  linlogflag  <- C[-length(C)] <= C[-1]
  AUCsegments <- ifelse(linlogflag,
                        diff(t)*(C[-1] + C[-length(C)])/2,
                        (C[-length(C)] - C[-1])*diff(t) /
                          (log(C[-length(C)]) - log(C[-1])))
  return(sum(AUCsegments))
}
```
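As a quick hand-computed illustration of the branching in `AUCcalc` (toy values, not from the simulation): the linear trapezoid is applied to ascending segments and the log trapezoid to descending ones, where it gives a slightly smaller area than the plain linear rule:

```r
# toy profile: one rising and one falling segment
t <- c(0, 1, 2)
C <- c(0, 10, 5)
# linear trapezoid for the rising segment 0 -> 10
lin.up   <- diff(t)[1] * (C[1] + C[2]) / 2                       # 5
# log trapezoid for the falling segment 10 -> 5
log.down <- (C[2] - C[3]) * diff(t)[2] / (log(C[2]) - log(C[3])) # 5/log(2)
auc <- lin.up + log.down
# plain linear trapezoidal over both segments, for comparison
auc.lin <- sum(diff(t) * (C[-1] + C[-length(C)]) / 2)            # 12.5
```

Here `auc` ≈ 12.21 < `auc.lin` = 12.5, since the log trapezoid assumes exponential (convex) decline between sampling points.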

```
AbsorptionDF <- function(D, ka, Vd, CL, t, ratio) {
  # Reference
  ke   <- CL/Vd
  C    <- C.sd(D=D, Vd=Vd, ka=ka, ke=ke, t=t)
  tmax <- t[C == max(C)][1]
  Cmax <- C.sd(D=D, Vd=Vd, ka=ka, ke=ke, t=tmax)
  AUC.t <- AUCcalc(t, C)
  t.1   <- t[which(t <= tmax)]
  t.cut <- max(t.1)
  C.1   <- C[which(t <= t.cut)]
  pAUC  <- AUCcalc(t.1, C.1)
  Cmax.AUC <- Cmax/AUC.t
  # Tests
  ka.t <- ka*ratio # Tests' ka
  res  <- data.frame(kaR=ka, kaT_kaR=ratio, kaT=signif(ka.t, 5),
                     Cmax=NA, Cmax.r=NA, pAUC=NA, pAUC.r=NA,
                     Cmax_AUC=NA, Cmax_AUC.r=NA)
  for (j in seq_along(ratio)) {
    # full internal precision, 4 significant digits for output
    C.tmp <- C.sd(D=D, Vd=Vd, ka=ka.t[j], ke=ke, t=t)
    if (!identical(ka.t[j], ke)) { # ka != ke
      tmax.tmp <- log(ka.t[j]/ke)/(ka.t[j] - ke)
    } else {                       # ka = ke
      tmax.tmp <- 1/ke
    }
    Cmax.tmp <- C.sd(D=D, Vd=Vd, ka=ka.t[j], ke=ke, t=tmax.tmp)
    res[j, "Cmax"]   <- signif(Cmax.tmp, 4)
    res[j, "Cmax.r"] <- signif(Cmax.tmp/Cmax, 4)
    AUC.t.tmp <- AUCcalc(t, C.tmp)
    t.1.tmp   <- t[which(t <= t.cut)]
    C.1.tmp   <- C.tmp[which(t <= t.cut)] # cut at tmax of R!
    pAUC.tmp  <- AUCcalc(t.1.tmp, C.1.tmp)
    res[j, "pAUC"]       <- signif(pAUC.tmp, 4)
    res[j, "pAUC.r"]     <- signif(pAUC.tmp/pAUC, 4)
    res[j, "Cmax_AUC"]   <- signif(Cmax.tmp/AUC.t.tmp, 4)
    res[j, "Cmax_AUC.r"] <- signif((Cmax.tmp/AUC.t.tmp)/Cmax.AUC, 4)
  }
  return(res)
}
```

```
SubjectsDF <- data.frame()
for (isub in 1:Nsub) {
  # sampling individual parameters (log-normal)
  ka.sub <- ka * exp(rnorm(1, sd = sqrt(ka.omega)))
  Vd.sub <- Vd * exp(rnorm(1, sd = sqrt(Vd.omega)))
  CL.sub <- CL * exp(rnorm(1, sd = sqrt(CL.omega)))
  DF.sub <- cbind(Subject = isub, V = Vd.sub, CL = CL.sub,
                  AbsorptionDF(D, ka.sub, Vd.sub, CL.sub, t, ratio))
  SubjectsDF <- rbind(SubjectsDF, DF.sub)
}
```

```
SubjectsDFstack <-
  reshape(SubjectsDF[, -c(2,3,4,6,7,9,11)],
          direction = 'long', varying = 3:5, v.names = "ratio",
          timevar = "metric", times = names(SubjectsDF)[3:5]) # hate this one!

ggplot(SubjectsDFstack, aes(x=kaT_kaR, y=ratio, color=factor(metric))) +
  theme_bw() +
  geom_point(size=.3) +
  geom_smooth(method = 'loess', se = FALSE) +
  stat_density_2d(data = subset(SubjectsDFstack,
                                metric == unique(SubjectsDFstack$metric)[1]),
                  geom = "raster", aes(alpha = ..density..),
                  fill = "#F8766D", contour = FALSE) +
  stat_density_2d(data = subset(SubjectsDFstack,
                                metric == unique(SubjectsDFstack$metric)[2]),
                  geom = "raster", aes(alpha = ..density..),
                  fill = "#6daaf8", contour = FALSE) +
  stat_density_2d(data = subset(SubjectsDFstack,
                                metric == unique(SubjectsDFstack$metric)[3]),
                  geom = "raster", aes(alpha = ..density..),
                  fill = "#6df876", contour = FALSE) +
  scale_alpha(range = c(0, 0.7)) +
  scale_x_continuous(trans='log2') +
  scale_y_continuous(trans='log')
```

]]>

Hi nobody,

method of residuals, feathering, Wagner–Nelson? Smells of modeling, which is not acceptable in BE. BTW, lag times are the killer. Good luck!

- Nerella NG, Block NH, Noonan PK.
*The Impact of Lag Time on the Estimation of Pharmacokinetic Parameters. I. One Compartment Model.* Pharm Res. 1993;10(7):1031–6.

- Csizmadia F, Endrényi L.
*Model-Independent Estimation of Lag Times with First-Order Absorption and Disposition.* J Pharm Sci. 1998;87(5):608–12. doi:10.1021/js9703333.

Hi Sury,

No idea about `Proc Power`. Let’s try the example of FARTSSIE 2.4 (which is based on Julious’ Example 4.1.1.1*) in `PowerTOST`:

```
library(PowerTOST)
design   <- "parallel" # Well...
desired  <- 0.90       # Target power
alpha    <- 0.025      # Probability of the type I error
sigma    <- 40         # Common (pooled) standard deviation
margin   <- 10         # Maximum allowed difference
mean.A   <- 160        # Test
mean.B   <- 158        # Reference
theta0   <- mean.A - mean.B       # Expected difference
if (theta0 > 0) theta0 <- -theta0 # Force non-inferiority
logscale <- FALSE
sampleN.noninf(alpha=alpha, CV=sigma, logscale=logscale, margin=margin,
               theta0=theta0, targetpower=desired, design=design)
```

```
++++++++++++ Non-inferiority test +++++++++++++
            Sample size estimation
-----------------------------------------------
Study design: 2 parallel groups
untransformed data (additive model)
alpha = 0.025, target power = 0.9
Non-inf. margin = 10
True diff. = -2, CV = 40

Sample size (total)
  n     power
470   0.900652
```
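As a rough sanity check of this result, the normal approximation gives almost the same total (a sketch only; `sampleN.noninf()` uses the exact t-distribution, hence its slightly larger 470):

```r
# normal-approximation sample size for a non-inferiority test,
# two parallel groups, untransformed data
alpha  <- 0.025
power  <- 0.90
sigma  <- 40   # common SD
margin <- 10
theta0 <- -2   # expected difference
delta  <- margin - theta0  # distance of the expected difference from the margin
n.grp  <- 2 * sigma^2 * (qnorm(1 - alpha) + qnorm(power))^2 / delta^2
2 * ceiling(n.grp)         # approximate total sample size: 468
```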

Sure.

That’s also an estimate. The true value is unknown.

Not sure what you mean here. Can you try to explain?

- Julious SA.
*Sample sizes for clinical trials with Normal data.* Stat Med. 2004;23(12):1949–50. doi:10.1002/sim.1783.

PS: How about calculating the slope from at least 3 points before Cmax (excluding Cmax) and calling it, let’s say, the “invasion constant”, which should be (if CL is constant) a measure of ka, or?]]>
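The PS above could be sketched like this (a toy one-compartment profile; the 15% lower cut-off follows the earlier suggestion in the thread, and the name `inv.const` is purely illustrative):

```r
# toy one-compartment profile with first-order absorption
ka <- 1.39; ke <- 0.347; D <- 400; Vd <- 1
t  <- c(0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8, 12)
C  <- D/Vd * ka/(ka - ke) * (exp(-ke*t) - exp(-ka*t))
# ascending points: at least 15% of Cmax, strictly before Cmax
i.max <- which.max(C)
keep  <- which(C >= 0.15 * max(C) & seq_along(C) < i.max)
stopifnot(length(keep) >= 3) # need at least 3 points for the regression
# slope of C over t in the absorption phase ("invasion constant")
inv.const <- unname(coef(lm(C[keep] ~ t[keep]))[2])
```

Whether such a slope tracks ka closely enough across formulations would of course need the real-life data from many compounds mentioned earlier.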