Helmut
★★★
Vienna, Austria,
2022-04-08 15:17
(48 d 03:45 ago)

Posting: # 22918
Views: 1,247
 

 EMA: New product-specific guidances [BE/BA News]

Dear all,

on April 04 the EMA published revised draft product-specific guidances for ibuprofen, paracetamol, and tadalafil. In the footnote on page 1 we find:

* This revision concerns defining what is meant by ‘comparable’ Tmax as an additional main pharmacokinetic variable in the bioequivalence assessment section of the guideline.

Then:

Bioequivalence assessment: Comparable median (≤ 20% difference) and range for Tmax.


See there why I consider this invention crap.
In short: tmax follows a discrete distribution on an interval scale. Calculating the ratio of values is a questionable procedure.
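To see the problem in numbers – a minimal sketch with a made-up sampling schedule (not from any study):

```r
# Hypothetical sketch: tmax can only take values on the sampling grid,
# so the '% difference' between medians moves in discrete jumps
grid     <- seq(0.5, 3, by = 0.5)  # sampling every 30 minutes (in hours)
med.R    <- grid[2]                # median tmax of R: 1.0 h
med.T    <- grid[3]                # closest possible different median: 1.5 h
pct.diff <- 100 * (med.T - med.R) / med.R
pct.diff                           # 50: one grid step already fails the 20% criterion
```

With a 30-minute grid around a median of one hour, the smallest possible nonzero ‘difference’ is already 50%.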

End of consultation 31 July 2022.

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2022-04-09 11:45
(47 d 07:17 ago)

(edited by ElMaestro on 2022-04-09 12:06)
@ Helmut
Posting: # 22919
Views: 1,074
 

 How would you implement it?

Hi Helmut and all,

I was a little afraid of this. Thanks a lot for posting.

» Comparable median (≤ 20% difference) and range for Tmax

I am confused. I can see the point in regulating the matter, but I feel there is a lot that's left to be answered.

But let me ask all of you.

Question 1
Do you read this as: You need to be comparable (20% diff) for the median AND for the range? (i.e. is there also a 20% difference requirement for the range???)

» Calculating the ratio of values is a questionable procedure.

Question 2:
Whether we like it or not, we have to find a way forward. And this has a lot of degrees of freedom. In a nonparametric universe where we try to resolve it, there could be all sorts of debate re. Hodges–Lehmann, Kruskal–Wallis, the Wilcoxon test, confidence levels, bootstrapping, pairing, and what not.

So, kindly allow me to throw this on the table:

Imagine we have these Tmax levels for T and R in a BE trial, units could be hours.
Tmax.T <- c(2.0, 2.5, 2.75, 2.5, 2.0, 2.5, 1.5, 2.0)
Tmax.R <- c(4.0, 3.0, 3.5, 3.75, 1.5, 3.5, 5.5, 4.5)


Let us implement something that has the look and feel of a test along the lines of what regulators want.

Test1 <- wilcox.test(Tmax.T, alt = "two.sided", conf.int = TRUE,
                     correct = TRUE, conf.level = 0.90)
Test1$conf.int

I am getting a CI of 1.999929-2.500036 (2 to 2.5).
We can compare this with
median(Tmax.R)*c(0.8, 1.2)
The test fails. Would that be a way to go?

No? then how about:
Test2 <- wilcox.test(Tmax.T / Tmax.R, alt = "two.sided", conf.int = TRUE,
                     correct = TRUE, conf.level = 0.90)
Test2
Test2$conf.int


I am getting 0.47-0.92. Not within 0.8 to 1.25, test fails.

Shoot me.
How would you prefer to implement the comparability exercise for Tmax? (I am not so much interested in your thoughts on alpha/confidence level, exact T or F, etc. I am mainly interested in a way to make the comparison itself, so please make me happy and focus on that :rotfl:).

Mind you, the data above might be paired :-D .... or it might not, depending on whether it was from an XO or not. This adds complexity, all depending on the implementation.
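If the data are indeed paired, one candidate – a sketch only, nobody knows whether regulators would accept it – is the paired Wilcoxon on the same hypothetical data (warnings about ties are expected and suppressed here):

```r
# Paired analysis of the hypothetical tmax data above (assuming an XO design,
# i.e., values at the same index belong to the same subject)
Tmax.T <- c(2.0, 2.5, 2.75, 2.5, 2.0, 2.5, 1.5, 2.0)
Tmax.R <- c(4.0, 3.0, 3.5, 3.75, 1.5, 3.5, 5.5, 4.5)
Test3  <- suppressWarnings(         # ties trigger warnings with exact CIs
  wilcox.test(Tmax.T, Tmax.R, paired = TRUE, conf.int = TRUE,
              conf.level = 0.90))
Test3$estimate                      # Hodges-Lehmann estimate of T - R
Test3$conf.int                      # 90% CI of the paired difference
```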

And, question 3, if the comparability thing also applies to range, how to implement that?

And question 4, sample size calculation is going to get messy for these products, if we have to factor in comparability of Tmax at the 20% level. I am not outright saying I have a bad taste in my mouth, but I am leaning towards thinking this could easily translate into a complete showstopper for sponsors developing the products. What's your gut feeling?

At the end of the day answers to Q1-Q4 above hinge not only on what you think is the right thing to do; of equal importance is what you think regulators will accept. :-)

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2022-04-09 18:40
(47 d 00:22 ago)

@ ElMaestro
Posting: # 22920
Views: 1,050
 

 Confuse a Cat Inc.

Capt’n, my capt’n!

» » Comparable median (≤ 20% difference) and range for Tmax
» Do you read this as: You need to be comparable (20% diff) for the median AND for the range? (i.e. is there also a 20% difference requirement for the range???)

Oh dear, I missed that! The range has a breakdown point of 0 – even if all values are identical except one of them, this single value will change the range. On the other hand, if you have two ‘contaminations’ on opposite sides, the range will be the same. R script for simulations acc. to my experiences with ibuprofen at the end. Gives:

         Min. 1st Qu.  Median 3rd Qu.    Max. Range
  T   1.00000 1.33333 1.83333 2.16667 2.50000   1.5
  R   0.83333 1.29167 1.58333 2.16667 2.33333   1.5
T - R 0.16667 0.04167 0.25000 0.00000 0.16667   0.0
T / R 1.20000 1.03226 1.15789 1.00000 1.07143   1.0

Close shave for the median (+16%). Here no problems with the range because we have two contaminations.
A goody: Replace in the script the lines

x <- rtruncnorm(n = n, a = a, b = b, mean = mu, sd = mu * CV)
T <- R <- roundClosest(x, spl)
T[T == min(T)] <- max(T)
R[R == max(R)] <- min(R)

by

R <- roundClosest(rtruncnorm(n = n, a = a, b = b, mean = mu, sd = mu * CV), spl)
T <- R + 20 / 60

to delay T by 20 minutes.

         Min. 1st Qu.  Median 3rd Qu.    Max.   Range
  T   1.16667 1.66667 2.00000 2.50000 2.83333 1.66667
  R   0.83333 1.33333 1.66667 2.16667 2.50000 1.66667
T - R 0.33333 0.33333 0.33333 0.33333 0.33333 0.00000
T / R 1.40000 1.25000 1.20000 1.15385 1.13333 1.00000


» » Calculating the ratio of values is a questionable procedure.
»
» Whether we like it or not, we have to find a way forward. And this has a lot of degrees of freedom.

I’ve read a lot in the meantime. Still not sure whether it is allowed at all (‼) to calculate a ratio of discrete values with potentially unequal intervals.

» In a nonparametric universe where we try to resolve it, there could be all sorts of debate re. Hodges–Lehmann, Kruskal–Wallis, the Wilcoxon test, confidence levels, bootstrapping, pairing, and what not.

Didn’t have the stamina to figure out why you get so many warnings in your code. However, you are aware that nonparametrics gives the EMA an anaphylactic shock?

» Shoot me.

Later.

» How would you prefer to implement the comparability exercise for Tmax? (I am not so much interested in your thoughts on alpha/confidence level, exact T or F, etc. I am mainly interested in a way to make the comparison itself, so please make me happy and focus on that :rotfl:).

I’m working on it.

» […] if the comparability thing also applies to range, how to implement that?

Sorry, I think that’s just bizarre. Honestly, despite your excellent exegesis, I guess (or rather hope?) that only the median is meant. If otherwise, wouldn’t the almighty oracle have written this:

Comparable (≤ 20% difference) median and range for Tmax.


» […] sample size calculation is going to get messy for these products, if we have to factor in comparability of Tmax at the 20% level. I am not outright saying I have a bad taste in my mouth, but I am leaning towards thinking this could easily translate into a complete showstopper for sponsors developing the products. What's your gut feeling?

From some preliminary simulations I guess that we would need somewhat tighter sampling intervals than usual in order to ‘catch’ tmax in any and every case.

» At the end of the day answers to Q1-Q4 above hinge not only on what you think is the right thing to do; of equal importance is what you think regulators will accept. :-)

Of course. If we went back to the 2001 Note for Guidance (and the current one of the WHO), with a nonparametric test and a pre-specified acceptance range, everything would be much easier.

P.S.: I updated the article. It’s a work in progress. Perhaps you will come up with more questions.


library(truncnorm)
roundClosest <- function(x, y) {
  # Round x to the closest multiple of y
  return(y * round(x / y))
}
sum.simple <- function(x) {
  # Nonparametric summary: remove the mean but keep any NAs
  y <- summary(x)
  if (length(y) == 6) {
    y <- y[c(1:3, 5:6)]
  } else {
    y <- y[c(1:3, 5:7)]
  }
  names <- c(names(y), "Range")
  y[length(y) + 1] <- y[["Max."]] - y[["Min."]]
  y     <- setNames(as.vector(y), names)
  return(y)
}
set.seed(123456)
spl <-  10 / 60 # Sampling every 10 (!) minutes
a   <-  40/60   # Lower limit of truncated normal (40 minutes)
b   <- 150/60   # Upper limit of truncated normal (2.5 hours)
mu  <- 100/60   # Common for ibuprofen
CV  <- 0.50     # Realistic acc. to my studies
n   <- 16       # Sufficient given the low variability
x   <- rtruncnorm(n = n, a = a, b = b, mean = mu, sd = mu * CV)
T   <- R <- roundClosest(x, spl) # both are identical!
# ‘Contaminations’
T[T == min(T)] <- max(T)
R[R == max(R)] <- min(R)
res <- as.data.frame(matrix(ncol = 6, nrow = 4,
                            dimnames = list(c("  T", "  R",
                                              "T - R", "T / R"),
                                            names(sum.simple(T)))))
res[1, ] <- sum.simple(T)
res[2, ] <- sum.simple(R)
res[3, ] <- res[1, ] - res[2, ]
res[4, ] <- res[1, ] / res[2, ]
print(round(res, 5))


ElMaestro
★★★

Denmark,
2022-04-09 21:47
(46 d 21:14 ago)

@ Helmut
Posting: # 22921
Views: 1,007
 

 Confuse a Cat Inc.

Hi Hötzi,

should we submit a dataset to EMA and suggest them to publish it along with a description (numerical example) of how exactly they wish to derive the decision?

When the FDA indicated that they were going in the direction of in vitro popBE for inhalanda and nasal sprays, they published a dataset and showed exactly how to process the data to figure out the pass/fail criterion that satisfies the regulator. If the EMA did the same here, we'd have all doubt eliminated.
I think we need to know exactly:
1. Do we use nonparametrics or not?
2. Do we use logs or not?
3. Is the decision of 20% comparability based on a confidence interval or on something else?
3a. If there is a CI involved, is it a 90% or 95% CI or something else?
4. Are we primarily working on ratio or on a difference?
5. Is the bootstrap involved?
6. How should we treat datasets from parallel trials, and how should we treat data from XO (i.e. how to handle considerations of paired and non-paired options)?

My gut feeling is that they want nonparametrics for the Tmax comparability part (yes I am aware of the sentence).

Actually, perhaps they just want the decision taken on the basis of the estimates of medians and ranges from min to max?


If we submit a dataset, let us make sure we submit one with ties (the one I pasted above had none).

Pass or fail!
ElMaestro
Ohlbe
★★★

France,
2022-04-11 11:29
(45 d 07:33 ago)

@ ElMaestro
Posting: # 22923
Views: 934
 

 Confuse a Cat Inc.

Hi ElMaestro,

» My gut feeling is that they want nonparametrics for the Tmax comparability part (yes I am aware of the sentence).

My gut feeling is that all they expect to get are descriptive statistics: report median Tmax for Test, median Tmax for Reference, calculate a % difference (however inappropriate this may be), pass if it is not more than 20%, otherwise fail.
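If that reading is correct, the whole ‘assessment’ boils down to a few lines; a sketch (the helper name and the data are mine, not from any guideline):

```r
# Purely descriptive check: pass if the medians of tmax differ by no more
# than 20%; the ranges are merely reported, not tested
tmax.check <- function(tmax.T, tmax.R, limit = 0.20) {
  pct.diff <- 100 * (median(tmax.T) - median(tmax.R)) / median(tmax.R)
  list(median.T = median(tmax.T), median.R = median(tmax.R),
       range.T  = range(tmax.T),  range.R  = range(tmax.R),
       pct.diff = pct.diff, pass = abs(pct.diff) <= 100 * limit)
}
# invented data: medians 1.5 vs 1.25 h, i.e., +20% - a borderline pass
tmax.check(c(1.25, 1.5, 1.5, 1.75), c(1.0, 1.25, 1.25, 1.5))
```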

Consequence: if you have more than 20% difference between sampling times around the expected Tmax, you're screwed if median Tmax values differ even by just one sampling time, even if this has strictly no clinical relevance (this could be brought up in the comments to the draft guideline: come on guys, are you sure a Tmax of 10' for one formulation and 15' for the other is really something totally unacceptable? I mean, even for tadalafil you should be able to keep yourself busy until it works).

Range: no expectation described. No idea.

Of course I may be totally wrong.

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-04-11 13:59
(45 d 05:03 ago)

@ Ohlbe
Posting: # 22925
Views: 914
 

 Confuse a Cat Inc.

Hi Ohlbe,

» My gut feeling is that all they expect to get are descriptive statistics: report median Tmax for Test, median Tmax for Reference, …

So far so good. Standard for ages.

» … calculate a % difference (however inappropriate this may be), …

It is indeed.

» … pass if it is not more than 20%, otherwise fail.

That’s my understanding as well.

» Consequence: if you have more than 20% difference between sampling times around the expected Tmax, you're screwed if median Tmax values are different even by just one sampling time, …

Correct. IMHO, you need
  • equally spaced intervals until absorption is essentially complete (in a one compartment model at least two times the expected tmax in all subjects) and
  • likely narrower intervals than usual.
As shown in my example in the other post, tmax will drive the sample size. How much larger will it have to be? Not the slightest idea. Likely much larger.

» … even if this has strictly no clinical relevance (this could be brought up in the comments to the draft guideline: come on guys, are you sure a Tmax of 10' for one formulation and 15' for the other is really something totally unacceptable ?

Exactly. Recall what the almighty oracle stated in the BE-GL:

[…] if rapid release is claimed to be clinically relevant and of importance for onset of action or is related to adverse events, there should be no apparent difference in median tmax and its variability between test and reference product.

It boils down to: Is it clinically relevant? If not, a comparison is not required. Furthermore: PK ≠ PD.

» I mean, even for tadalafil you should be able to keep yourself busy until it works).

Tadalafil shows an effect before tmax. So what?
Not by chance. It’s common that the time point of Emax is < tmax.

» Range: no expectation described. No idea.

The range is completely useless. Like the mean it has a breakdown point of zero. Imagine with \(\small{n\rightarrow \infty}\): $$\small{\left\{R_1=1,\ldots, R_n=1\phantom{.25}\right\} \rightarrow \textrm{Range}(R)=0\phantom{.25}}\\\small{\left\{T_1=1,\ldots, T_n=1.25\right\} \rightarrow \textrm{Range}(T)=0.25}$$ Good luck in calculating a ratio.
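In numbers (trivial sketch with invented values):

```r
# Breakdown point of zero: a single contaminated value dictates the range
R <- rep(1, 16)              # all reference tmax identical
T <- c(rep(1, 15), 1.25)     # a single test value differs
range.R <- diff(range(R))    # 0
range.T <- diff(range(T))    # 0.25
range.T / range.R            # Inf: the 'ratio of ranges' is undefined
```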

» Of course I may be totally wrong.

So am I.

Ohlbe
★★★

France,
2022-04-11 15:48
(45 d 03:14 ago)

@ Helmut
Posting: # 22926
Views: 871
 

 Range

Hi Helmut,

» » Range: no expectation described. No idea.
»
» The range is completely useless. Like the mean it has a breakdown point of zero. Imagine with \(\small{n\rightarrow \infty}\): $$\small{\left\{R_1=1,\ldots, R_n=1\phantom{.25}\right\} \rightarrow \textrm{Range}(R)=0\phantom{.25}}\\\small{\left\{T_1=1,\ldots, T_n=1.25\right\} \rightarrow \textrm{Range}(T)=0.25}$$ Good luck in calculating a ratio.

I guess it depends on what they mean by "range". Are they using the word in its statistical meaning (difference between the largest and smallest values) or in its lay-language meaning (limits between which something varies)? I suspect the latter: lowest and highest observed Tmax values.

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-04-11 16:07
(45 d 02:55 ago)

@ Ohlbe
Posting: # 22927
Views: 871
 

 Range

Hi Ohlbe,

» I guess it depends on what they mean by "range". Are they using the word in its statistical meaning (difference between the largest and smallest values) or in its lay-language meaning (limits between which something varies)? I suspect the latter: lowest and highest observed Tmax values.

You mean that we only have to report the minimum and maximum tmax of T and R? But how should we understand this:

Comparable median (≤ 20% difference) and range for Tmax.

How to decide what is ‘comparable’? Ask ten people and get twelve answers?
The range is given in the section ‘Bioequivalence assessment’. We report other stuff as well (λz/t½, AUC0–t/AUC0–∞, :blahblah:). So why does it sit there?

Ohlbe
★★★

France,
2022-04-11 16:20
(45 d 02:42 ago)

@ Helmut
Posting: # 22928
Views: 871
 

 Range

Hi Helmut,

» But how should we understand this:

Comparable median (≤ 20% difference) and range for Tmax.


Oh, the exact same way as before: nobody knows.

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-04-11 13:03
(45 d 05:59 ago)

@ ElMaestro
Posting: # 22924
Views: 922
 

 So many questions, so few answers

Hi ElMaestro,

» should we submit a dataset to EMA and suggest them to publish it along with a description (numerical example) of how exactly they wish to derive the decision?

Maybe better not only a data set but also a proposed method for evaluation.

» When FDA indicated that they were going in the direction of in vitro popBE for inhalanda and nasal sprays they published a dataset and showed exactly how to process the data to figure out the pass / fail criterion that satisfies the regulator.

I have a counter-example. This goody was recommended in Apr 2014 and revised in Jan 2020. Nobody knew how the FDA arrived at the BE-limits and why. Last month I reviewed a manuscript explaining the background. Made sense, but for years it was a mystery.

» If EMA would do the same here we'd have all doubt eliminated.

Utopia. Notoriously the EMA comes up with unsubstantiated ‘inventions’ and leaves it to us to figure out if and how they work. Examples?

Rounded regulatory constant k = 0.760 and upper cap at CVwR = 50% in reference-scaling, sequence(stage) term in TSDs, ‘substantial’ accumulation if AUC0–τ > 90% of AUC0–∞, for partial AUCs a default cut-off time of τ/2, Css,min for originators but Css,τ for generics, you name it.


» I think we need to know exactly:
» 1. Do we use nonparametrics or not?

Guess.

» 2. Do we use logs or not?

Logs? Possibly tmax follows a Poisson distribution. I will try to compile data from my studies. See Fig. 2 and Fig. 3 in the article.

» 3. Is the decision of 20% comparability based on a confidence interval or on something else?

Likely the former; made up out of thin air.

» 3a. If there is a CI involved, is it a 90% or 95% CI or something else?

90%.

» 4. Are we primarily working on ratio or on a difference?

IMHO, calculating ratios of discrete values with potentially unequal intervals is nonsense.

» 5. Is the bootstrap involved?

A possible approach but why?

» 6. How should we treat datasets from parallel trials, and how should we treat data from XO (i.e. how to handle considerations of paired and non-paired options)?

I would suggest the Mann–Whitney U test (parallel) and the Wilcoxon signed-rank test (paired / crossover). Requires some tricks in case of tied observations (practically always) for the exact tests, e.g., the function wilcox_test() of package coin instead of wilcox.test().
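With base R one is stuck with the normal approximation once ties appear; a sketch on invented, heavily tied data (coin’s exact permutation tests would be the cleaner route, as said):

```r
# Unpaired comparison (parallel design) with ties: base wilcox.test() cannot
# compute an exact CI, hence exact = FALSE (normal approximation)
tmax.T <- c(1.25, 1.25, 1.50, 1.50, 1.75, 2.00, 2.00, 2.25)  # invented, tied
tmax.R <- c(1.25, 1.50, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50)
mw <- wilcox.test(tmax.T, tmax.R, conf.int = TRUE, conf.level = 0.90,
                  exact = FALSE, correct = TRUE)
mw$estimate                  # Hodges-Lehmann shift estimate (T - R)
mw$conf.int                  # 90% CI by the normal approximation
```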

» My gut feeling is that they want nonparametrics for the Tmax comparability part (yes I am aware of the sentence).

I doubt it. Really.

» Actually, perhaps they just want the decision taken on the basis of the estimates of medians and ranges from min to max?


I think so (see also Ohlbe’s post). However, that’s statistically questionable (politely speaking). See the updated article and hit F5 to clear the browser’s cached version.

» If we submit a dataset, let us make sure we submit one with ties (the one I pasted above had none).

It’s extremely unlikely that you will find one without…

I explored one of my studies. Ibuprofen 600 mg tablets, single dose, fasting, 2×2×2 crossover, 16 subjects (90% target power for Cmax), sampling every 15 minutes till 2.5 hours. Re-sampled the reference’s tmax in 10⁴ simulations and applied the ±20% criterion:

   median        re-sampled      pct.med.diff      pass.pct
Min.   :1.000   Min.   :1.250   Min.   :-50.000   Mode :logical
1st Qu.:1.250   1st Qu.:1.375   1st Qu.:-30.000   FALSE:5950
Median :1.500   Median :1.500   Median :  0.000   TRUE :4050
3rd Qu.:2.250   3rd Qu.:1.750   3rd Qu.: 25.000       
Max.   :2.500   Max.   :2.500   Max.   :125.000

[image]

Power compromised. In ≈60% of simulated studies the reference could not be declared equivalent to itself.
Bonus question: Which distribution?

The generic tested in this study was approved 25 years ago and is still on the market. Any problems?

If you want to give it a try:

R <- c(1.25, 2.00, 1.00, 1.25, 2.50, 1.25, 1.50, 2.25,
       1.00, 1.25, 1.50, 1.25, 2.25, 1.75, 2.50, 2.25)
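For the record, one possible re-sampling scheme for the self-vs-self check (my own reconstruction – the scheme behind the table above may differ in details):

```r
# Re-sample the reference's tmax and apply the +/-20% criterion to the
# medians: how often is R declared 'comparable' to itself?
R <- c(1.25, 2.00, 1.00, 1.25, 2.50, 1.25, 1.50, 2.25,
       1.00, 1.25, 1.50, 1.25, 2.25, 1.75, 2.50, 2.25)
set.seed(123456)
nsims    <- 1e4
med.R    <- median(R)        # 1.5 h
med.sim  <- replicate(nsims, median(sample(R, length(R), replace = TRUE)))
pct.diff <- 100 * (med.sim - med.R) / med.R
mean(abs(pct.diff) <= 20)    # fraction of re-samples passing
```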


P.S.: Amazing that this zombie rises from the grave. See this post and this thread of June 2013…

Helmut
★★★
Vienna, Austria,
2022-04-30 14:59
(26 d 04:03 ago)

@ ElMaestro
Posting: # 22945
Views: 451
 

 Preliminary simulations

Hi ElMaestro,

» 2. Do we use logs or not?

As the ‘Two Lászlós’ – who else? – wrote*

The concentrations rise more steeply before the peak than they decline following the true maximum response. Consequently, it is more likely that large observed concentrations occur after than before the true peak time (\(\small{T_\textrm{max}^\circ}\)).

Apart from this theoretical consideration they demonstrated in simulations that the distribution of the observed tmax is not only biased but skewed to the right.

I could confirm that in my studies. Given that, your idea of using logs in simulations (but not in the evaluation) is excellent! It turned out that a CV of 50% is not uncommon – even with a tight sampling schedule.

I set up simulations. Say, we have a drug with an expected median tmax of R of 1.5 hours. If we aim to power the study at 90% for a moderate CV of Cmax of 20% we need 26 subjects. Let’s sample every 15 minutes and assume a shift in tmax for T of 15 minutes (earlier). Result of 10,000 simulations:

Sample size based on conventional PK metric
  BE limits    : 80.00% to 125.00%
  theta0       : 95% (assumed T/R ratio)
  CV           : 20% (assumed)
  Power        : 90% (target)
  n            : 26  (estimated)
  Power        : 92% (achieved)
  Sampling     : every 15 minutes
  mu.tmax (R)  : 1.50 h    (assumed for R)
  CV of tmax   : 50%       (assumed)
  shift        : -15.0 min (assumed for T)
  spread       : ±30.0 min
Not equivalent if medians differ by more than 20% {-18 min, +18 min}
  Empiric power: 76.00%

Oops!

How many subjects would we need to preserve our target power?

  n            : 78  (specified)
  Power        : 100%
Not equivalent if medians differ by more than 20% {-18 min, +18 min}
  Empiric power: 90.71%

Three times as many subjects. Not funny.
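My simulation code isn’t shown here, but the gist can be sketched as follows (the lognormal model for tmax, the uniform shift handling, and the discretization are my assumptions):

```r
# Empiric 'power' of the <=20% median criterion when T is truly shifted
# by -15 min (lognormal tmax, sampling every 15 min, n = 26)
set.seed(123456)
nsims   <- 2500
n       <- 26
smpl    <- 15 / 60                  # sampling interval (h)
mu.tmax <- 1.5                      # median tmax of R (h)
CV.tmax <- 0.5
shift   <- -15 / 60                 # true shift of T (h)
sdlog   <- sqrt(log(CV.tmax^2 + 1))
pass    <- logical(nsims)
for (i in seq_len(nsims)) {
  R <- rlnorm(n, meanlog = log(mu.tmax), sdlog = sdlog)
  T <- pmax(R + shift, smpl)        # shift T; keep tmax positive
  R <- smpl * round(R / smpl)       # discretize to the sampling grid
  T <- smpl * round(T / smpl)
  pass[i] <- abs(100 * (median(T) - median(R)) / median(R)) <= 20
}
mean(pass)                          # empiric power
```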

[image]


Another issue is the assumed shift in location, which is nasty. I tried –20 minutes (instead of –15).

[image]


I stopped the iterations with 500 (‼) subjects (power ≈82%). I’m not sure whether 90% is attainable even with an even crazier sample size. Note that the x-axis is in log-scale.

On the other hand, the spread is less important, since we are assessing medians. In other words, extreme values are essentially ignored. Then with the original shift of –15 minutes but a spread of ±60 minutes (instead of 30), we have already ≈80% power with 36 subjects and ≈90% with 66.

[image]


Is this really the intention? The wider the range in tmax, the more easily products will pass. Counterintuitive.

Furthermore, as a consequence of the skewed distribution, power curves are not symmetrical around zero – contrary to what the PKWP possibly (or naïvely?) assumed. It reminds me of the old days when the BE-limits were 80–120% and maximum power was achieved at a T/R-ratio of ≈0.98 (see there).

[image]


Consequently, a ‘faster’ test will more likely pass than a ‘slower’ one. That’s not my understanding of equivalence. Guess the Type I Error.


  • Tóthfalusi L, Endrényi L. Estimation of Cmax and Tmax in Populations After Single and Multiple Drug Administration. J Pharmacokinet Pharmacodyn. 2003; 30(5): 363–85. doi:10.1023/b:jopa.0000008159.97748.09.

ElMaestro
★★★

Denmark,
2022-04-30 19:10
(25 d 23:52 ago)

@ Helmut
Posting: # 22946
Views: 422
 

 Preliminary simulations

Hi Hötzi,

your post is potentially highly significant.

» Is this really the intention? The wider the range in tmax, the more easily products will pass. Counterintuitive.

The intention, as I understand it, was exactly the opposite. It goes completely against all intention, doesn't it?
I think this knowledge, if it holds in confirmatory simulations, should be published quickly and made available to regulators.

I very much hope that regulators will abstain completely from letting pride prevail over regard for the EU patient. So I hope they will not dismiss the argumentation. For example, I fear they could dismiss your findings because they don't have a palate for simulations (but note they like simulations well enough when it comes to f2; bootstrapping is a simulation, too).

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2022-05-01 15:56
(25 d 03:06 ago)

@ ElMaestro
Posting: # 22947
Views: 381
 

 Preliminary simulations

Hi ElMaestro,

» your post is potentially highly significant.

Potentially!

» » Is this really the intention? The wider the range in tmax, the more easily products will pass.
»
» The intention, as I understand it, was exactly the opposite. It goes completely against all intention, doesn't it?

Right, I don’t get it. I’m afraid this one joins the ‘methods’ of the PKWP made up out of thin air.

EMEA. The European Medicines Evaluation Agency.
The drug regulatory agency of the European Union.
A statistician-free zone.
Stephen Senn. Statistics in Drug Development. Wiley; 2004. p. 386.

‘Common sense’ is not always a good idea. The obvious is most often wrong.

It’s complicated. Like yesterday but this time no shift.

[image]


That’s what I expect.

» I think this knowledge, if it holds in confirmatory simulations, should be published quickly and made available to regulators.

That’s wishful thinking, taking the review process into account. Deadline for comments: July 31st.

» […] I could fear they would dismiss your findings because they don't have a palate for simulations (but note they like simulations well enough when it comes to f2; bootstrapping is a simulation, too).

Yep. ƒ2 is not a statistic (it depends on the number of samples and intervals). Given that, no closed form exists to estimate the location, its CI, or power (and hence, the Type I Error), because the distribution is unknown. That’s similar to the situation we are facing here.
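For illustration, ƒ2 and its dependence on the sampling points (the dissolution profiles are invented):

```r
# f2 similarity factor: the value changes with the number (and placement)
# of sampling points entering the mean squared difference
f2 <- function(R, T) 50 * log10(100 / sqrt(1 + mean((R - T)^2)))
R.diss <- c(15, 40, 65, 82, 91, 96)  # invented % dissolved, reference
T.diss <- c(12, 34, 58, 77, 89, 95)  # invented % dissolved, test
f2(R.diss, T.diss)                   # all six points
f2(R.diss[1:4], T.diss[1:4])         # first four points: a different value
```

Same profiles, different ‘statistic’ once the late (similar) points are dropped.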


PS: You’ve got mail in your mailbox.

Helmut
★★★
Vienna, Austria,
2022-05-02 13:43
(24 d 05:19 ago)

@ ElMaestro
Posting: # 22950
Views: 337
 

 Simulated distributions

Hi ElMaestro,

» 2. Do we use logs or not?

To get an idea about the simulated distributions, a large sample size and sampling every ten minutes. Shift of T –15 minutes, spread ±30 minutes.

[image]


Why do we see for T a high density in the first interval? Try this one:

roundClosest <- function(x, y) {
  res           <- y * round(x / y)
  res[res <= 0] <- y # Force non-positive values into the 1st interval
  return(res)
}
HL <- function(x, na.action = na.omit) {
  x   <- na.action(x)
  y   <- outer(x, x, "+")
  res <- median(y[row(y) >= col(y)]) / 2
  return(res)
}
n       <- 500
mu.tmax <- 1.5 # hours
CV.tmax <- 0.5
shift   <- -15 # minutes
spread  <- 30  # --"--
smpl    <- 10  # --"--
# Sample tmax of R from lognormal distribution
# parameterized for the median

R       <- rlnorm(n = n, meanlog = log(mu.tmax),
                         sdlog = sqrt(log(CV.tmax^2 + 1)))
# Introduce shift in location of T
shifts  <- runif(n = n, min = shift / 60 - spread / 60, # can be negative!
                        max = shift / 60 + spread / 60)
T       <- R + shifts
# Discretization by the sampling interval 'smpl'
R       <- roundClosest(R, smpl / 60)
T       <- roundClosest(T, smpl / 60)
res     <- data.frame(min    = rep(NA_real_, 3),
                      median = NA_real_,
                      HL     = NA_real_,
                      max    = NA_real_)
row.names(res)    <- c("R", "T", "R-T") # row 3 below computes R minus T
res[1, c(1:2, 4)] <- quantile(R, probs = c(0, 0.5, 1))
res[1, 3]         <- HL(R)
res[2, c(1:2, 4)] <- quantile(T, probs = c(0, 0.5, 1))
res[2, 3]         <- HL(T)
res[3, 2:3]       <- res[1, 2:3] - res[2, 2:3]
print(round(res, 5))

        min  median      HL     max
R   0.33333 1.50000 1.58333 7.00000
T   0.16667 1.33333 1.33333 6.66667
R-T      NA 0.16667 0.25000      NA


Since some shifted values might be zero or negative, I forced them into the first sampling interval – adding to the ones which are already there. Does that make sense?

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz