Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-22 16:30
(5717 d 19:56 ago)

Posting: # 2233
Views: 25,310
 

 Critical review of EU BE guideline (Rev.1) [Regulatives / Guidelines]

Dear all!

After the draft was published yesterday, with a consultation period until January 31st, 2009, I want to start a discussion of critical points.
Some of them are already given in this thread. Feel free to join the party.

Acceptance limits, page 14 (lines 553-554)
'Confidence intervals should be presented to two decimal places. To be inside the acceptance interval the lower bound should be ≥ 80.00 and the upper bound should be ≤ 125.00.'
Wonderful. All the software I know compares the CI with the acceptance range internally at full precision and prints out something like 'BE demonstrated at alpha=0.05' or 'Failed to demonstrate BE', or - if the entire CI is outside the AR - 'BioINequivalence demonstrated…'. It's easy to round the CI for the presentation, but it's not possible to change the internal algorithm of a commercial program. Following this logic one could end up with, let's say, a lower CL of 79.995 (from the calculation), the sentence 'Failed to show average bioequivalence for confidence=90.00 and percent=20.0.' (WinNonlin), and a rounded result of 80.00 which supports BE??
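
Just to illustrate with a hypothetical lower limit - a minimal Python sketch, rounding half up as one would probably do for the report:

from decimal import Decimal, ROUND_HALF_UP

def round2(x):
    # round a value (passed as a string to avoid float artefacts) to two decimals, half up
    return Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

lower, upper = "79.995", "112.34"    # hypothetical 90% CI in percent, full precision
print(float(lower) >= 80.0)          # False - the program's internal full-precision check fails
print(round2(lower) >= Decimal("80.00") and round2(upper) <= Decimal("125.00"))   # True - 79.995 becomes 80.00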

Acceptance limits, page 14 (lines 555-557)
The target parameter for early exposure (if clinically relevant) is the partial AUC truncated at the median tmax of the reference (lines 718-719); tmax is dropped from the evaluation. This is in accordance with the FDA's guideline. The same goalposts as for Cmax apply (i.e., widening should be possible for HVDs in a replicate design study only; see Section 4.1.10, page 16, lines 631-641).
This is funny, because one will rarely find any reference giving values for the partial AUC truncated at tmax in the literature (at least I don't remember any). So if you have a claim on early exposure, a pilot study is mandatory for sample size planning (preferably in a replicate design - variability may hit you). Endrényi, Tóthfalusi, and Midha have shown that Cmax is a lousy metric for the rate of absorption (based on simulations) - but on the other hand I haven't seen any study demonstrating that truncated AUC performs ‘better’ than tmax. So what’s the rationale - except harmonization with the FDA?
Since I do have a large database of studies, I’m in the mood to reanalyse them and become famous. :-D
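
For what it's worth, the metric itself is simple to compute per subject with the linear trapezoidal rule; what to do when the truncation time falls between sampling points is not spelled out in the draft, so the interpolation below is just one possible convention (a sketch with made-up numbers):

import numpy as np

def pauc(t, c, t_cut):
    # partial AUC from 0 to t_cut, linear trapezoidal rule,
    # linearly interpolating the concentration at t_cut if needed
    t, c = np.asarray(t, float), np.asarray(c, float)
    grid = np.unique(np.append(t[t < t_cut], t_cut))
    return np.trapz(np.interp(grid, t, c), grid)

t = [0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8, 12, 24]             # h
c = [0, 1.2, 2.8, 3.9, 4.5, 4.8, 4.6, 4.0, 3.5, 2.7, 2.1, 1.3, 0.4]  # ng/ml
print(pauc(t, c, 1.5))   # truncated at a hypothetical reference median tmax of 1.5 h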

Acceptance limits, page 14 (lines 558-559)
‘[…] at steady state AUCtau, Cmax,ss, and Cmin,ss should be analysed using the same acceptance interval as stated above.’
Cmin,ss was probably added after concerns about oxycodone, but this metric will be rather tough to meet for some drugs. Since scaling is not allowed, sample sizes are expected to be very high (for HVDs, even in steady state the variability of Css,min » Css,max).

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-08-22 17:42
(5717 d 18:44 ago)

@ Helmut
Posting: # 2234
Views: 22,728
 

 Critical review of EU BE guideline (Rev.1)

Dear all,

here are some of my additional points:

Evaluation of data from several bioequivalence studies, page 13 (lines 533-542)
'1. If after the failed trial or trials some well justified modifications have been made [...] A positive study in this situation is not downgraded by previous negative results.'

Da bin ich aber froh! (How lucky I am). :-D
I cannot see the point. What does the failed study have to do with the study now showing BE? The formulation for which BE is now claimed is the reformulated one, a new product.
Generalising this would mean that all unsuccessful efforts and attempts during pharmaceutical or other development work must be reported.

Evaluation of data from several bioequivalence studies, page 13/14 (lines 537-542)
'2. [...] It is not acceptable to pool together two ambiguous studies to reach a positive conclusion.'

This may be OK from the regulator's point of view (he would like to see at least one study showing BE), but it is against all principles of a meta-analysis. Pooling studies together to obtain a clearer picture is why meta-analysis was invented.

Evaluation of data from several bioequivalence studies, page 14 (lines 543-547)
'3. If a failed study(s) clearly show that the test product is bioinequivalent [...] additional study(s) will be needed until evidence for bioequivalence clearly outweighs the evidence against [...] It is not acceptable to pool together positive and negative studies in a meta analysis.'

This is nonsense IMHO. How many additional studies must be done until the evidence outweighs? 1, 10, 100 or a thousand? Since we are dealing with statistics we always have the chance of an unlucky finding (set by our alpha to 5% for the patient and by sample size planning to 20% for the producer, if we set power to 80%)! Again, the last sentence is against the principles of meta-analysis.
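
To put a number on that 'unlucky finding' - assuming independent studies of a truly bioequivalent product, each planned with 80% power:

# chance of at least one failed study among n truly bioequivalent studies,
# each with 80% power (producer's risk beta = 20%)
for n in (1, 2, 3, 5):
    print(n, round(1 - 0.80 ** n, 3))   # 0.2, 0.36, 0.488, 0.672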

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-23 04:02
(5717 d 08:24 ago)

@ d_labes
Posting: # 2236
Views: 22,631
 

 Critical review of EU BE guideline (Rev.1)

Dear D. Labes!

❝ I cannot see the point. What does the failed study have to do with the study now showing BE? The formulation for which BE is now claimed is the reformulated one, a new product.

❝ Generalising this would mean that all unsuccessful efforts and attempts during pharmaceutical or other development work must be reported.


Exactly! See Section 4.1 (page 5, lines 142-144):
'The clinical overview [...] should list all studies carried out with the product applied for. All bioequivalence studies [...] must be submitted.'

Evaluation of data from several bioequivalence studies, page 13/14 (lines 537-542)

❝ '2. [...] It is not acceptable to pool together two ambiguous studies to reach a positive conclusion.'

❝ This may be OK from the regulator's point of view (he would like to see at least one study showing BE), but it is against all principles of a meta-analysis. Pooling studies together to obtain a clearer picture is why meta-analysis was invented.


An anecdote from my side to lift your mood:
Recently we had a discussion with one of the members of the 'pharmacokinetolophystic subgroup' (©2008 by ElMaestro) about a hypothetical example.
Study 1 in 24 subjects; 1 subject showing a low response to the reference, not BE due to variability higher than the one used in sample size planning. BE in nonparametric analysis and in an additional parametric analysis after removal of the outlier (even if planned in the protocol): --> not acceptable
Study 2 is Study 1 repeated in the same 24 subjects; the outlying subject from Study 1 shows a 'normal' response, BE demonstrated: --> acceptable, even if the regulator knows of Study 1.
Why? We have a tendency to believe in repeated values (just think about laboratory screenings). Maybe results seen in Study 2 were just by chance and Study 1 'correct' (subpopulation?).
Anyhow, pooling of Studies 1 & 2: --> not acceptable
Again, why?
Study 3 in 48 subjects; 1 subject showing a low response to the reference, but BE demonstrated due to the standard error being lower (by sqrt(2)) as compared to Study 1: --> acceptable

Do you get the idea? :no:

4.1.2 Reference and test product (page 6, lines 193-194)
'When variations to a generic product are made, the comparative medicinal product for the bioequivalence study should be the reference medicinal product.'
IMHO, this is a very good idea - nothing was stated about it in the 'old' NfG, and I think that generally, once a generic was approved, variations were studied in comparison to the company's own product - not the innovator's.
Lines 202-207 are also a good idea - although a difference in content of more than 5% is rarely seen - it protects against surprises in the biostudy if ±5% is used in sample size planning.

4.1.2 Reference and test product (page 7, lines 218-220)
'The study sponsor will have to retain a sufficient number of all investigational product samples in the study for one year in excess of the accepted shelf life or two years after completion of the trial or until approval whichever is longer to allow re-testing, if it is requested by the authorities.'
Fine. And if a request comes, what about the results of re-testing years after the shelf life? Back-extrapolation with the parameters found in stability testing of the IMPs - endless discussions ahead.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-08-25 11:14
(5715 d 01:12 ago)

@ Helmut
Posting: # 2243
Views: 22,446
 

 Critical review of EU BE guideline (Rev.1)

Dear Helmut,

❝ An anecdote from my side to lift your mood: [...]


Thanks. But your anecdote is rather suited to deepening a bad mood or making one run away screaming.

❝ Do you get the idea? :no:


Inscrutable are the Regulator's ways!
His spirit bloweth where it listeth.

(The holy Bible).


4.1.2 Reference and test product (page 6, lines 193-194)

'When variations to a generic product are made, the comparative medicinal product for the bioequivalence study should be the reference medicinal product.'

❝ IMHO, this is a very good idea - nothing was stated about it in the 'old' NfG, ...


I am not sure this isn't an over-interpretation. Lines 190-192 state what you find a good idea, and so do I.
But the reference medicinal product is defined above in lines 177-183 as "... must be a medicinal product ... on the basis of a complete dossier ...", and this term is used in lines 193-194.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-25 13:06
(5714 d 23:19 ago)

@ d_labes
Posting: # 2246
Views: 22,319
 

 Variations

Dear D. Labes!

OK, reading all the sections again, I would say they could be shortened into something like: 'The comparative medicinal product in any bioequivalence study (including variations to a generic product) should be the reference medicinal product authorised in the Community, on the basis of a complete dossier [...].'

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Jaime_R
★★  

Barcelona,
2008-08-23 18:47
(5716 d 17:39 ago)

@ Helmut
Posting: # 2237
Views: 22,389
 

 Critical review of EU BE guideline (Rev.1)

Dear all,

a funny but common mistake:
4.1.4 Study conduct / Standardisation, page 8 (lines 261-263):

'The subjects should abstain from food and drinks, which may interact with circulatory, gastrointestinal, hepatic or renal function (e.g. alcoholic or xanthine-containing beverages or grapefruit juice) during a suitable period before and during the study.'


You will not find xanthine in food and beverages. The correct term is methylxanthines, namely 1,3,7-trimethylxanthine (aka caffeine: coffee, tea, cola, energy drinks), 3,7-dimethylxanthine (aka theobromine: cocoa, chocolate), and 1,3-dimethylxanthine (aka theophylline: traces in tea). The term 'xanthines' for the methylated derivatives of xanthine - although widely used - is unknown in chemistry.

Regards, Jaime
Ravi
★    

India,
2008-08-25 10:19
(5715 d 02:07 ago)

@ Jaime_R
Posting: # 2242
Views: 22,407
 

 Critical review of EU BE guideline (Rev.1)

Dear All,

I have one query regarding the lines mentioned below.

It is acceptable to use a two-stage approach when attempting to demonstrate bioequivalence. An initial group of subjects can be treated and their data analysed. If bioequivalence has not been demonstrated an additional group can be recruited and the results from both groups combined in a final analysis. [Line 564-566, Page 14]

Now my question is: suppose that when the data of the initial group of subjects are analysed, bioequivalence is demonstrated. What is to be done then? Do we still have to recruit the additional group of subjects as mentioned in the protocol, or can we stop after the first stage? And if we stop at the first stage, will the study be accepted by the EMEA or not?

Thanks and Regards,

Thanks & Regards
Ravi Pandey
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-25 13:43
(5714 d 22:43 ago)

@ Ravi
Posting: # 2249
Views: 23,367
 

 Two Stage Design

Dear Ravi,

I would go with ‘Method C’ suggested by Potvin et al. (2007).* The design was validated in 1,000,000 simulated BE studies in numerous combinations of sample sizes (12–60 subjects in stage 1) and CVs (10–100%).
The α-level (patient's risk) was maintained at ≤0.052 - which is not the case for methods currently applied in Canada and Japan (evaluation of stage 1 at α=0.05, simple pooling of a second stage; results presented in the reference as 'Method A').
Below an outline of the procedure:

[image: flow chart of the two-stage 'Method C' procedure]
For your example you have two possibilities to show BE already in stage 1:
  1. Power ≥80%: evaluation at α=0.0500
  2. Power <80%: evaluation at α=0.0294
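
For what it's worth, my reading of the stage-1 decisions boils down to the following skeleton (a sketch only; the power estimate and the two confidence intervals - 90% for α 0.05, 94.12% for α 0.0294 - are assumed to be computed elsewhere from the stage-1 data, e.g. with the stage-1 CV and an assumed GMR of 0.95):

def potvin_method_c(power_stage1, ci_90, ci_9412):
    # Stage-1 decision skeleton of 'Method C' (my reading of Potvin et al. 2007).
    # power_stage1: estimated power using the stage-1 CV and an assumed GMR;
    # ci_90 / ci_9412: (lower, upper) of the 90% (alpha 0.05) and 94.12%
    # (alpha 0.0294) confidence intervals, in percent.
    def inside(ci):
        return ci[0] >= 80.0 and ci[1] <= 125.0
    if power_stage1 >= 0.80:
        # sufficient power: evaluate at alpha = 0.05 and stop either way
        return "BE" if inside(ci_90) else "not BE - stop"
    if inside(ci_9412):
        return "BE"      # insufficient power, but BE shown at the adjusted alpha
    # otherwise re-estimate the sample size from the stage-1 CV and go to
    # stage 2; the pooled analysis (with a stage term) again uses alpha = 0.0294
    return "continue to stage 2"

print(potvin_method_c(0.85, (82.5, 118.0), (81.3, 119.7)))   # hypothetical figures -> 'BE'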

  • Potvin D, Diliberti CE, Hauck WW, Parr AF, Schuirmann DJ, and RA Smith
    Sequential design approaches for bioequivalence studies with crossover designs
    Pharmaceut Statist (pre-print 2007), DOI: 10.1002/pst.294
    Online abstract

Edit: Our first study with this design was approved by one of Berlin’s IECs and the German BfArM in May 2009.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
JPL
☆    

Vienna,
2008-08-28 11:44
(5712 d 00:42 ago)

@ Helmut
Posting: # 2268
Views: 22,123
 

 Two Stage Design

Dear HS,

I don't have access to the paper, but it is an interesting issue. But what is meant by "evaluate the power at stage 1"? Am I right that only the sample size is adjusted, but initial assumptions on difference and variance are used otherwise?
Regards,
JPL
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-28 13:50
(5711 d 22:36 ago)

@ JPL
Posting: # 2270
Views: 22,213
 

 Two Stage Design

Dear JPL!

❝ I don't have access to the paper, but it is an interesting issue.


You should get it for about 30 USD - it’s worth it and really quite nice to read. The study which formed the basis of the publication was performed by ‘The Product Quality Research Institute’ (PQRI), a non-profit organization founded in 1999. Members, amongst others, are FDA-CDER, Health Canada, USP, AAPS, and PhRMA. A working group on sequential designs in BE was founded in 2003. They are really heavyweights (definitely not some obscure back-yard statisticians).
Each of the four methods was tested in 1 million (!) simulated BE studies, with the exception of method A – which is even stricter than a simple add-on design (because it contained a stop criterion if power in the first part was >80%). Method A was not considered further, because the alpha-risk was inflated.

❝ But what is meant by "evaluate the power at stage 1"?


This is just the validated decision procedure between the branches.

❝ Am I right that only the sample size is adjusted, but initial assumptions on difference and variance are used otherwise?


Essentially the sample size is updated based only on the variance of part 1. Although in adaptive designs it’s possible to update the sample size based both on a new estimate of the CV and on the size of the effect (in the BE setting the ratio T/R), only the former was studied and validated; evaluation of the latter is still being considered by the group.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-08-28 14:09
(5711 d 22:16 ago)

@ Helmut
Posting: # 2271
Views: 22,130
 

 Two Stage Design

Dear HS and JPL!

❝ You should get it for about 30 USD - it's worth it and really quite nice to read. [...]



You can save your bucks. Write to Walter Hauck (wh "AT" usp.org) and ask for a reprint.
I got mine within a day.
To quote Walter Hauck's answer:
"Good timing. I have just gotten this myself".

--
Edit: @ replaced with AT in the e-mail address to avoid address collection by spam robots [Ohlbe]
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-11-05 14:22
(5642 d 21:04 ago)

@ Helmut
Posting: # 2634
Views: 21,536
 

 Sequential Design

Dear all,

according to a presentation given by Eric Ormsby (Health Canada, Therapeutic Products Directorate) at the BioInternational 2008, the method proposed by Potvin et al. (2007) “should be considered when variation and expected mean differences are uncertain” and will be implemented in the new Canadian guidances (drafts expected for late winter this year).
I talked to a couple of members of the EMEA’s PK group about ambiguities in the current draft, and they suggested sending comments to the EMEA, especially referring to this paper.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
banusunman
☆    

2008-08-26 18:56
(5713 d 17:30 ago)

@ Jaime_R
Posting: # 2256
Views: 22,212
 

 Critical review of EU BE guideline (Rev.1)

Dear all,

Here is another one:

On page 5, lines 150-151, the criterion for an "adequate wash-out period" is not defined. However, in the previous 2001 NfG, for steady-state designs, it was given as "at least 3-times the terminal half-life". In FDA's 2003 GfI, an adequate washout period was described as "e.g., more than 5 half lives of the moieties to be measured".
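
The arithmetic behind such rules of thumb is of course just the fraction of the previous dose still present after n half-lives:

# fraction of the previous dose remaining after n terminal half-lives
for n in (3, 5, 7):
    print(n, 0.5 ** n)   # 0.125, 0.03125, ~0.008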

Do you think they forgot to define it.. or?

Regards,
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-10-15 18:02
(5663 d 18:24 ago)

@ banusunman
Posting: # 2537
Views: 21,469
 

 Washout

Dear Banu!

❝ On page 5, lines 150-151, the criterion for an "adequate wash-out period" is not defined. [...]


❝ Do you think they forgot to define it.. or?


You are right. This is in line with the section 'Number of subjects' (lines 222-224), where the detailed specifications in the current NfG (variance, α, T/R ratio, power) are replaced by a laconic 'The number of subjects to be included in the study should be based on an appropriate sample size calculation.' ;-)

It's funny to notice that the intention of writing a Cookbook sometimes leads to washed-out phrases (in general, under certain circumstances, preferably, adequate, appropriate,...).
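
Just as a reminder of which 'ingredients' that laconic phrase hides (the ones the current NfG spells out: CV, α, expected T/R ratio, power), a rough normal-approximation of the total sample size for a 2×2 cross-over - a ballpark sketch only, exact calculations use the noncentral t:

from math import ceil, log
from scipy.stats import norm

def n_2x2_approx(cv, gmr=0.95, alpha=0.05, power=0.80, theta=1.25):
    # normal-approximation total sample size for a 2x2 cross-over BE study
    s2 = log(cv**2 + 1.0)                     # within-subject variance (log scale)
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power) if gmr != 1 else norm.ppf(1 - (1 - power) / 2)
    n = 2 * s2 * (z_a + z_b)**2 / (log(theta) - abs(log(gmr)))**2
    return 2 * ceil(n / 2)                    # round up to an even number of subjects

print(n_2x2_approx(cv=0.25))                  # CV 25%, expected T/R 0.95, 80% power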

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-08-25 11:49
(5715 d 00:37 ago)

@ Helmut
Posting: # 2245
Views: 22,364
 

 Critical review of EU BE guideline (Rev.1)

Dear all!

I have another one:

4.1.1 Study design, Alternative designs, page 5/6 (lines 163-169)
'A multiple dose study as an alternative to single dose may also be acceptable if problems of sensitivity of the analytical method precludes sufficiently precise plasma concentration measurements after single dose administration. As Cmax at steady state may be less sensitive to differences in the absorption rate than Cmax after single dose, bioequivalence should, if possible, be determined for Cmax after single dose administration (i.e. after the first dose [...]).

The last sentence excludes one of the possibilities for dealing with HVDs, namely a steady-state study to reduce the variability of Cmax in comparison to the single-dose variability (if this is possible for the formulation under consideration).

It is required by this sentence that the whole concentration time course of the first dose has to be followed up by a sufficient sampling schedule.
This is in practice essentially the request of doing two studies: single dose and multiple dose, although combined.

It remains the secret of the authors of this DRAFT why the first dose Cmax should be a suitable metric or should be better in detecting differences in case of "... problems of sensitivity of the analytical method precludes sufficiently precise plasma concentration measurements after single dose administration ..."

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-25 14:11
(5714 d 22:15 ago)

@ d_labes
Posting: # 2250
Views: 22,370
 

 Critical review of EU BE guideline (Rev.1)

Dear D. Labes!

❝ '[...] As Cmax at steady state may be less sensitive to differences in the absorption rate than Cmax after single dose, bioequivalence should, if possible, be determined for Cmax after single dose administration (i.e. after the first dose [...]).'


❝ The last sentence excludes one of the possibilities for dealing with HVDs, namely a steady-state study to reduce the variability of Cmax in comparison to the single-dose variability (if this is possible for the formulation under consideration).


Exactly. From a PK point of view it's possible to show that Cmax is less sensitive to differences than AUC when going from single dose to steady state (this statement is also given in the earlier sentence). For HVDs the option to go for steady state was eliminated for AUC (use a replicate design instead - which does not reduce the variability per se; and since scaling is not allowed - large studies lie ahead!), but it was left here for the case that it's simply not possible to characterize the profile due to analytical problems. Since Cmax is less sensitive, every attempt should be made to 'catch' Cmax after a single dose. Another example from the past is bisphosphonates, where the amount excreted in urine was accepted for the extent of BA, but Cmax from plasma was still required by some countries' regulators.

❝ It is required by this sentence that the whole concentration time course of the first dose has to be followed up by a sufficient sampling schedule.


I don't think it makes sense to sample the entire profile when you already know beforehand that you cannot obtain results >LLOQ at the later time points. But the conditions given elsewhere in the Guideline about the selection of time points for the characterization of Cmax/tmax apply.

❝ This is in practice essentially the request of doing two studies: single dose and multiple dose, although combined.


Yes. Not only from the Guideline, but also from personal communications I had with some of the authors.

❝ It remains the secret of the authors of this DRAFT why the first dose Cmax should be a suitable metric or should be better in detecting differences in case of "... problems of sensitivity of the analytical method precludes sufficiently precise plasma concentration measurements after single dose administration ..."


As mentioned above it's possible to show it by simulations. I have the references somewhere, but since I'm leaving for vacation tomorrow - it must wait until Sep 17th at the earliest. :-D
But you can do it on your own: Fire up some PK software (even M$-Excel would do the job) - define parameters of two hypothetical formulations in such a way that Cmax is let's say 10% apart - go into steady state - the difference should be smaller.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-08-25 15:03
(5714 d 21:23 ago)

@ Helmut
Posting: # 2251
Views: 22,236
 

 Critical review of EU BE guideline (Rev.1)

Dear HS,

thanks for the clarification. My points were a bit hasty, without deeper thought :-D. Have a nice holiday.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-11-21 17:06
(5626 d 18:20 ago)

@ Helmut
Posting: # 2769
Views: 21,374
 

 Urban legend?

Dear all!

❝ Exactly. From a PK point of view it's possible to show that Cmax is less sensitive to differences than AUC when going from single dose to steady state (this statement is also given in the earlier sentence).

❝ [...] it's possible to show it by simulations. I have the references somewhere, [...]


❝ But you can do it on your own: Fire up some PK software (even M$-Excel would do the job) - define parameters of two hypothetical formulations in such a way that Cmax is let's say 10% apart - go into steady state - the difference should be smaller.


Really?
Or just an Urban legend of the pharmacokinetolophystic (© ElMaestro) type?

single dose    AUCinf   T 84.02 ng*h/ml   R 93.35 ng*h/ml   T/R 90 %
               Cmax     T  5.84 ng/ml     R  6.49 ng/ml     T/R 90 %
steady state   AUCtau   T 83.31 ng*h/ml   R 92.56 ng*h/ml   T/R 90 %
               Cmax     T 10.84 ng/ml     R 11.64 ng/ml     T/R 90 %

T/R (steady state) = T/R (single dose) = 90%, both for AUC and Cmax. Of course the Cmax values in steady state are further apart (0.80 ng/ml) than after a single dose (0.65 ng/ml), but the ratio stays the same. Am I missing something?
If somebody is interested in repeating the simulation: I used WinNonlin's Exp1.pwo dataset (like in this post) and changed F from 1 (Reference) to 0.9 (Test).
  h    ng/ml
 0.1   2.6
 0.25  4.817
 0.5   6.596
 0.75  6.471
 1     6.049
 1.5   5.314
 2     4.611
 2.5   4.5
 3     4.392
 4     4.013
 5     3.698
 6     3.505
 8     3.382
12     2.844
14     2.383
24     1.287

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-11-24 16:06
(5623 d 19:20 ago)

@ Helmut
Posting: # 2790
Views: 21,252
 

 Urban legend?

Dear Helmut!

❝ ❝ But you can do it on your own: Fire up some PK software (even M$-Excel would do the job) - define parameters of two hypothetical formulations in such a way that Cmax is let's say 10% apart - go into steady state - the difference should be smaller.


Really? Or just an Urban legend of the pharmacokinetolophystic (© ElMaestro) type?


Self-doubts are the essence of real scientific work! :-D

❝ ... ratio stays the same. Am I missing something?


Try to change not F in the model, but rather the absorption constant ka. Or both.

Using a one-compartment model and the rate constants

ke=0.06
ka(test)=2
ka(ref)=4

I obtained (F, dose and Vc identical for both formulations)
- single dose Cmax ratio=0.9565
- steady state Cmax ratio=0.9711 (tau=8 h)
(I hope I implemented the formulas correctly in M$-Excel in a hurry.)
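
A minimal sketch reproducing that check numerically (my own quick implementation: one-compartment model with first-order absorption, F·D/V set to 1, Cmax taken as the maximum over a dense time grid):

import numpy as np

def c_1cpt(t, ka, ke, tau=None):
    # one-compartment, first-order absorption; F*D/V = 1
    # with tau given, the steady-state profile over one dosing interval is returned
    if tau is None:
        return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
    return ka / (ka - ke) * (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                             - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

ke, ka_t, ka_r, tau = 0.06, 2.0, 4.0, 8.0
t_sd = np.linspace(0, 24, 100001)      # single dose
t_ss = np.linspace(0, tau, 100001)     # one dosing interval at steady state

print(c_1cpt(t_sd, ka_t, ke).max() / c_1cpt(t_sd, ka_r, ke).max())            # ~0.9565
print(c_1cpt(t_ss, ka_t, ke, tau).max() / c_1cpt(t_ss, ka_r, ke, tau).max())  # ~0.9711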

To quote your link to Wikipedia:
"Like all folklore, urban legends are not necessarily false but they are often distorted, exaggerated, or sensationalized over time." ;-)
This one-and-a-half percent increase is not so impressive to me. Maybe two-compartment models do better. Or changes in both F and ka.

Regards,

Detlew
d_labes
★★★

Berlin, Germany,
2008-08-25 13:16
(5714 d 23:10 ago)

@ Helmut
Posting: # 2247
Views: 22,349
 

 Critical review of EU BE guideline (Rev.1)

Dear all!

to have it all in one place I transfer it from this thread:

Statistical analysis, page 13 (lines 506-521)
'The presentation of the findings of a bioequivalence trial should include a 2x2-table that presents for each sequence (in rows) and each period (in columns) means, standard deviations and number of observations for the observations in the respective period of a sequence. In addition, tests for difference and the respective confidence intervals for the treatment effect, the period effect, and the sequence effect should be reported for descriptive assessment. A test for carry-over should not be performed and no decisions regarding the analysis (e.g. analysis of the first period, only) should be made on the basis of such a test.

Tests for difference are not of interest in a BE study. OK, they term it descriptive, but in the past I have seen requests to discuss significant treatment or period effects although bioequivalence could be proven. I guess this situation will become standard.

At least as far as standard 2x2 cross-over studies are concerned, sequence and carry-over effects are confounded. The same is true for higher-order designs with more than 2 treatments.
Thus requiring a test for sequence effects while avoiding a test for carry-over is contradictory.

Minor: I cannot see any additional value in reporting tables of the sequence-by-period means, SDs and n. What is required here - tables of the raw values or of the log-transformed ones? IMHO this information is already present in the ANOVA tables.

Regards,

Detlew
d_labes
★★★

Berlin, Germany,
2008-08-25 13:32
(5714 d 22:54 ago)

@ Helmut
Posting: # 2248
Views: 22,317
 

 Critical review of EU BE guideline (Rev.1), tmax, MRT, LZ

Dear all!

to have it all in one place I transfer it from this thread:

Statistical analysis, page 13 (lines 504-505)
A nonparametric analysis is not acceptable.

This cannot be discussed seriously without crying.

Moreover, any statistical analysis of tmax has gone from the text.
It is only named under 4.1.5 'Characteristics to be investigated' (page 9), but not dealt with any further.
See also HS's opening post of this thread.

Also note that MRT has disappeared.
Newly mentioned is LambdaZ, which is nearly the same as the half-life t1/2, but on an inverse scale.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-08-25 15:10
(5714 d 21:16 ago)

@ d_labes
Posting: # 2253
Views: 22,678
 

 Critical review of EU BE guideline (Rev.1), tmax, MRT, LZ

Dear D. Labes!

Statistical analysis, page 13 (lines 504-505)

❝ A nonparametric analysis is not acceptable.


❝ This cannot be discussed seriously without crying.


Join the party. In the past I have evaluated about 300 BE studies nonparametrically (additive model for tmax, multiplicative for AUC and Cmax); the ANOVA was only given in an annex, essentially to get the CV, to check whether my sample size assumptions turned out to be justified, and to have it ready for future studies. Sometimes I was asked to make the ANOVA the primary analysis post hoc. This was the crying phase.
Later on, driven by both regulators and sponsors, I retreated to:
  • nonparametric analysis for tmax
  • decision tree about the type of analysis based on intra-subject residuals
    • parametric if p(Shapiro-Wilk)>0.05
    • nonparametric if p(Shapiro-Wilk)≤0.05
This was the screaming phase (but covered by ICH-E9):

‘In the choice of statistical methods due attention should be paid to the statistical distribution […]. When making this choice (for example between parametric and non-parametric methods) it is important to bear in mind the need to provide statistical estimates of the size of treatment effects together with confidence intervals […].’


Since regulators hate pretesting (even one of the advocates of nonparametrics in the field of BE, Dieter Hauschke, wouldn't do it), right now I'm with nonparametrics for tmax and parametrics for AUC and Cmax. Still, I'm looking at the residuals. Quoting Jones and Kenward (Design and Analysis of Cross-Over Trials, 2nd ed. 2003):

“No analysis is complete until the assumptions that have been made in the modeling have been checked. Among the assumptions are that the repeated measurements on each subject are independent, normally distributed random variables with equal variances. Perhaps the most important advantage of formally fitting a linear model is that diagnostic information on the validity of the assumed model can be obtained. These assumptions can be most easily checked by analyzing the residuals.”


In the worst case I present two evaluations: the full data set and another one with the potential outlier removed (only if it's a failure of the reference). But this is unplanned and stupid anyway.
This is the current phase of hating pragmatism.

But the new Guideline will lift any need for thinking from my side; being part of the flock I only have to follow the shepherd to be happy.

❝ Also note that MRT has disappeared.


Interesting. Maybe because the Guideline deals only with IR formulations - and the group didn't consider it of any importance? For MR formulations a reference to CPMP/EWP/280/96 is included.

❝ Newly mentioned is LambdaZ, which is nearly the same as the half-life t1/2, but on an inverse scale.


Yes, but how to deal with it? Strictly speaking, if the half-lives are different, BE testing does not make sense. For an IR formulation both AUC and Cmax should be influenced only by absorption; elimination is drug-specific. If the apparent elimination is different, it would mean that we are trapped by flip-flop PK, i.e., the late phase of the profile represents absorption rather than elimination. In such a case the main parameter AUCt and the last sampling at 72 hours are debatable.
What should we do now? Come up with some descriptive statistics and :blahblah:, or run a pretest to show there's no difference before proceeding? If yes, what's the distribution of lambdaZ/t1/2? I used to give the harmonic means* in a table - but I'm not sure about a transformation useful in a comparison…


  • Lam, FC, Hung, CT and DG Perrier
    Estimation of Variance for Harmonic Half-Lives
    J Pharm Sci 74/2, 229–31 (1985)
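
As a numerical footnote to the λz vs. t1/2 point: the harmonic mean of the individual half-lives is just ln 2 divided by the arithmetic mean of the individual λz values, so the two metrics carry essentially the same information (made-up values):

import numpy as np

lz = np.array([0.173, 0.139, 0.158, 0.201, 0.147])   # hypothetical individual lambda_z values (1/h)
t_half = np.log(2) / lz                               # individual half-lives (h)

harmonic_mean = len(t_half) / np.sum(1.0 / t_half)
print(harmonic_mean, np.log(2) / lz.mean())           # identical by construction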

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-10-13 17:38
(5665 d 18:47 ago)

@ d_labes
Posting: # 2523
Views: 22,694
 

 tmax, nonparametrics

Dear DLabes!

Statistical analysis, page 13 (lines 504-505)

A nonparametric analysis is not acceptable.


❝ This cannot be discussed seriously without crying.

❝ Moreover, any statistical analysis of tmax has gone from the text.


Some more background information…
Quoting an e-mail I received from Walter Hauck: “Also interesting that they now say they will not accept non-parametric analyses. That seems a step backwards.”
I talked to a lot of regulators at the EGA-CMP(h) Symposium in Paris last week and what we already suspected is true: nonparametrics are evil, therefore tmax had to be eliminated from the PK metrics as well. We all know that tmax is a lousy metric for the rate of absorption, but on the other hand it has served well in thousands of studies in the past. Although the partial AUC was introduced as a metric for 'early exposure' by the FDA in 2003 (http://www.fda.gov/cder/guidance/5356fnl.pdf), to my knowledge there's no empirical evidence that this metric performs better than tmax.

Schall and Luus (1992)1 have shown that in the one-compartment open model the difference in tmax and the ratio of the Cmax/AUC ratios of two drug formulations are equivalent characteristics for the comparison of the absorption rate. In two- or higher-compartment models these relationships hold approximately. This provides a powerful argument for the use of the observed Cmax/AUC ratio, rather than tmax, as a measure of the rate of absorption, because it is well known that Cmax/AUC can be observed with higher precision than tmax.
Midha et al. (1994)2 suggested that mean AUCtmax may not be a reliable basis for estimation of relative extent [sic!] of absorption because the within-subject variabilities of the partial areas over the first few hours after dose tend to be very high, but decline rapidly to become stable at some point that may fall before or after mean-tmax.
Midha et al. (1993)3 showed that high within-subject variability increases the likelihood of failure of 90% confidence intervals to fall within preset bioequivalence limits in simulations in which the true ratio of geometric means was set at unity.
In a retrospective analysis of 11 BE studies, early partial areas showed high variability which stabilized at about 2×tmax, which led the authors to their often-quoted statement: “Once absorption is over, formulation differences no longer apply.”4
  1. R Schall and HG Luus
    Comparison of absorption rates in bioequivalence studies of immediate release drug formulations
    Int J Clin Pharm Ther Toxicol 30/5, 153–9 (1992)
  2. Midha KK, Hubbard JW, Rawson MJ, and L Gavalas
    The application of partial areas in the assessment of rate and extent of absorption in bioequivalence studies of conventional release products: Experimental evidence
    Eur J Pharm Sci 2, 351–63 (1994)
  3. Midha KK, Hubbard JW, Yeung PKF, Ormsby E, McKay G, Hawes EM, Korchinski ED, Gurnsey T, Rawson M, and R Schwedes
    Application of Replicate Design
    In: KK Midha and HH Blume
    Bio-International: Bioavailability, Bioequivalence and Pharmacokinetics
    medpharm Scientific Publishers, Stuttgart, pp 53–68 (1993)
  4. Midha KK, Hubbard JW, and MJ Rawson
    Retrospective evaluation of relative extent of absorption by the use of partial areas under plasma concentration versus time curves in bioequivalence studies on conventional release products
    Eur J Pharm Sci 4, 381–4 (1996)
If the partial AUC truncated at the reference's tmax becomes a main target parameter (according to statements at the EGA Symposium no widening of the acceptance range of 0.80-1.25 will be considered), this metric will lead to exceptionally high sample sizes because of its intrinsic variability.

Below are two examples from my database (IR formulations, clinical claim on Cmax/tmax). Both of them showed very low to moderate CVintra for both AUC and Cmax, and were suitably powered in order to be able to demonstrate BE for Cmax.

Example 1:
Original evaluation
Median tmax (ref.) 1.5h
Additive model (nonparametric)
Hodges-Lehmann estimate: ±0.00h (100%)
Moses CI: -0.25, +0.25 (85.0%, 115%)
Conclusion: BE
New evaluation
pAUC1.5
Multiplicative model (parametric)
GMR: 90.1%
CI: 75.0%, 110.1%
CVintra 26.4% (AUCt 13.3%, Cmax 17.0%)
Conclusion: not BE

Example 2:
Original evaluation
Median tmax (ref.) 1.5h
Additive model (nonparametric)
Hodges-Lehmann estimate: +0.26h (116%)
Moses CI: ±0.00, +0.50 (100%, 130%)
Conclusion: not BE
New evaluation
pAUC1.5
Multiplicative model (parametric)
GMR: 66.1%
CI: 53.1%, 82.0%
CVintra 29.7% (AUCt 6.33%, Cmax 9.43%)
Conclusion: not BE

Power to show BE for partial AUC was clearly insufficient, even in example 1. CVintra for the partial AUC was thrice that of Cmax and almost five times that of AUCt in example 2.


Edit: FDA-link corrected to latest archived copy. [Helmut]

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-10-14 10:33
(5665 d 01:53 ago)

@ Helmut
Posting: # 2525
Views: 21,504
 

 tmax, nonparametrics

Dear Helmut,

thanks for sharing your experience.

Full ACK with all your points. But what can we do if a regulatory authority :no: misses all the discussions and results in the scientific community?

Our sponsors in most cases take guidances not as such but as unalterable law! They don't like to get deficiency letters just because the letter of a guidance was not followed strictly.

I suspect "We HAVE to be just sheep". Meanwhile I will try to adapt to eating hay ;-).

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-10-14 13:29
(5664 d 22:57 ago)

@ d_labes
Posting: # 2528
Views: 21,643
 

 tmax, nonparametrics

Dear D. Labes!

❝ […] But what can we do if a regulatory authority :no: misses all the discussions and results in the scientific community?


Everybody should actively challenge the draft at conferences, use personal contacts with regulators, and send comments by Jan 31st, 2009 to [email protected] using this M$-Word template.

❝ Our sponsors in most cases take guidances not as such but as unalterable law! They don't like to get deficiency letters just because the letter of a guidance was not followed strictly.


Yes, I know… I think it's part of our business to explain to them that they should act right now. Once the guideline is finalized, they will only have two options: if deviating from the guideline, come up with a justification beforehand (probably also aiming for scientific advice) and cross their fingers, or strictly follow the track (which will result in ridiculous sample sizes, more studies in patients, etc.)…

❝ I suspect "We HAVE to be just sheep". Meanwhile I will try to adapt to eat hey ;-).


You will have to grow a lot of gut since the ratio of lengths (body/gut) is 1:6 for man and 1:24 for sheep. You should change your nutrition to vegetables only, otherwise you will suffer from putrefaction of meat. ;-)

P.S.:
More than two-thirds of the publications in ‘Biometrics’ and ‘Statistics in Medicine’ are based on Bayesian and/or nonparametric methods.
I’m really glad that regulators in the BE field rely solely on methods based on the t-distribution, introduced exactly 100 years ago by William S. Gosset, aka “A Student of Statistics” (1876-1937). :angry:

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-11-21 16:41
(5626 d 18:45 ago)

@ d_labes
Posting: # 2768
Views: 21,788
 

 tmax, Canadian approach

Dear D. Labes,

❝ But what can we do if a regulatory authority :no: misses all the discussions and results in the scientific community?


Sometimes I forget the true country of the brave and the free - Canada...
They already had a Notice to Industry on Rapid Onset Drugs back in 2005.

Eric Ormsby (Health Canada, Therapeutic Products Directorate) in his talk at the BioInternational 2008 presented their approach for partial areas, which will be implemented in their new guideline, namely:
'The relative mean AUCReftmax of the test to reference formulation should be within 80 to 125%, where AUCReftmax for a test product is defined as the area under the curve to the time of the maximum concentration of the reference product, calculated for each study subject.'
No fiddling around with median tmax, no search for a reference value in the reference's SmPC - just straight out of the study, but:

Example 1:
Original evaluation
Median tmax (ref.) 1.5h
Additive model (nonparametric)
Hodges-Lehmann estimate: +0.00h (100%)
Moses CI: -0.25, +0.25 (85.0%, 115%)
Conclusion: BE
New evaluation
Partial AUC individually truncated at tmax of the reference
Multiplicative model (parametric)
GMR: 85.7%
CI: 72.3%, 102.3%
CVintra 23.8% (AUCT 13.3%, Cmax 17.0%)
Conclusion: not BE

Example 2:
Original evaluation
Median tmax (ref.) 1.5h
Additive model (nonparametric)
Hodges-Lehmann estimate: +0.26h (116%)
Moses CI: +0.00, +0.50 (100%, 130%)
Conclusion: not BE
New evaluation
Partial AUC individually truncated at tmax of the reference
Multiplicative model (parametric)
GMR: 62.4%
CI: 46.2%, 84.3%
CVintra 42.4% (AUCT 6.33%, Cmax 9.43%)
Conclusion: not BE

Power to show BE for the partial AUC in the 'Canadian way' was also insufficient, even in example 1 (see this post for another method). CVintra for the partial AUC was five times that of Cmax and almost seven times that of AUCT in example 2. Whatever approach we choose, IMHO it will be very difficult to show BE for rapid onset drugs if partial areas are a main target parameter, due to their high variability.
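
For clarity, the 'calculated for each study subject' part amounts to something like this (a sketch with one made-up subject and identical sampling times in both periods, so the reference tmax coincides with a sampling point and no interpolation is needed; how the subject-wise values are aggregated into the 'relative mean' is another question, cf. the edit below):

import numpy as np

t      = np.array([0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])   # h
c_ref  = np.array([0, 2.1, 4.0, 4.9, 4.6, 3.8, 3.1, 2.2, 1.7, 0.9])    # ng/ml, reference period
c_test = np.array([0, 1.5, 3.2, 4.3, 4.5, 3.9, 3.2, 2.3, 1.7, 0.9])    # ng/ml, test period

i = int(np.argmax(c_ref))                           # index of this subject's reference tmax
auc_ref  = np.trapz(c_ref[:i + 1],  t[:i + 1])      # AUC(0 -> tmax,ref), reference
auc_test = np.trapz(c_test[:i + 1], t[:i + 1])      # same truncation applied to the test profile
print(t[i], auc_test / auc_ref)                     # this subject's AUCReftmax ratio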

--
Edit: Canada doesn't ask for a confidence interval! See this post.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-11-24 14:53
(5623 d 20:33 ago)

@ Helmut
Posting: # 2788
Views: 21,164
 

 tmax, Canadian approach

Dear Helmut,

❝ Sometimes I forget the true country of the brave and the free - Canada...


Brave and free for a long time already, regarding their outstanding handling of Cmax :ok:.

[...] where AUCReftmax for a test product is defined as the area under the curve to the time of the maximum concentration of the reference product, calculated for each study subject.'

❝ No fiddling around with median tmax, no search for a reference value in the reference's SmPC - just straight out of the study,


This is valuable, but is not easily extended to replicate designs.

but:


❝ Whatever approach we choose, IMHO it will be very difficult to show BE for rapid onset drugs if partial areas are a main target parameter, due to their high variability.


I wonder why they haven't adopted the Cmax/AUC metric, which is reported to have better properties regarding variability - especially because L. Endrenyi (a frequent speaker at TPD meetings) from Toronto (which is in Canada, as far as I know ;-)) is one of its inventors and strong advocates. :ponder:

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2009-02-08 01:44
(5548 d 09:42 ago)

@ d_labes
Posting: # 3205
Views: 20,910
 

 Early exposure, Canadian approach: GMR only!

Dear D. Labes,

❝ ❝ Sometimes I forget the true country of the brave and the free - Canada...


❝ Brave and free for a long time already, regarding their outstanding handling of Cmax :ok:.


Not to see the wood for the trees!

Though I quoted Eric correctly (the sentence is already given in Health Canada's Notice to Industry for rapid onset drugs from June 2005), I overlooked that nobody asks for a confidence interval. :-D
So only the mean should lie within 0.8-1.25.

Going back to my examples:
first post
Example 1: nonparametric tmax BE, parametric AUCReftmax BE
Example 2: nonparametric tmax not BE, parametric AUCReftmax not BE
second post
Example 3: nonparametric tmax not BE, parametric AUCReftmax BE (though close: GMR 81.2%)

I guess the 'Canadian way' would be an alternative for the EMEA's 'statistical experts' who do not accept dreadful nonparametrics - a way of dealing with high variability whilst not losing face.
Hopefully somebody suggested it to EMEA; I realized it just today (one week too late) and suggested in my comments keeping the nonparametric method. :rotfl:

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2009-02-09 09:50
(5547 d 01:35 ago)

@ Helmut
Posting: # 3211
Views: 20,462
 

 Early exposure, Canadian approach: GMR only!

Dear Helmut,

Not to see the wood for the trees!


❝ [...] I overlooked that nobody asks for a confidence interval. :-D So only the mean should lie within 0.8-1.25.


Many thanks for clarifying this point. Regarding the wood and the trees I can only say: me too. Maybe we are so fixated on the confidence interval approach that we see it lurking around every corner whenever bioequivalence limits are given.

BTW: when I read the Notice to Industry for rapid onset drugs, I am not sure what this notice really tells me.
After describing the Cmax and AUCReftmax criteria it is stated:
"Given the current increased precision of analytical methodologies and better methodologies to collect more frequent blood samples, it has been decided that bioequivalence requirements for these drug products need no longer be as stringent as recommended in Report C." ???

What does this mean? Does anybody have an idea?

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-12-25 16:59
(5592 d 18:27 ago)

@ Helmut
Posting: # 2980
Views: 21,416
 

 tmax; another example

Dear all,

another recent dataset. The SmPC of the reference states a median tmax of 2.75 h.

Nonparametric (planned) evaluation:
Median Tmax (ref) 4.8333 h
Hodges-Lehmann estimate -0.1042
92.66% CI: -1.2416, +1.2583 [AR: -0.8120,+0.8120] not BE

Partial AUC truncated at SmPC's (labeled) median tmax (EU?)
GMR: 85.87%
90% CI: 69.15%, 106.6% not BE
CVintra: 31.38%

Partial AUC truncated at median tmax (ref) from study USA (EU?)
GMR: 82.43%
90% CI: 73.28%, 92.71% not BE
CVintra: 16.75%

Partial AUC truncated individually at tmax (ref) - Canada
GMR: 81.21%
90% CI: 70.61%, 93.40% not BE
CVintra: 19.98%

CVintra values in the study were 9.24% (AUCt) and 19.6% (Cmax). All methods agree, but the variability of the partial areas was quite high (I've performed 13 studies with this formulation, so we had enough sampling points planned to properly characterize the peak).

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
janmacek
☆    

Czech Republic,
2008-08-27 14:21
(5712 d 22:05 ago)

@ Helmut
Posting: # 2261
Views: 22,440
 

 Automatic integration

Dear colleagues,

another point I would like to discuss deals with chromatography (lines 477-481 and 613-614): "The Applicant should discuss the number of chromatograms (and percentage of total number of chromatograms) that have not been automatically integrated, the reason for a different method of integration, the value obtained with the automatic integration and the non-automatic integration and a justification for the acceptance of each individual chromatograms that has not been automatically integrated."

What is the "automatic integration"? The analyst always has to manually select integration parameters and save them in an integration method. This is not an automatic process (although the software may suggest some initial values), but arbitrary selection based on several sample chromatograms. Then, all samples are integrated using these parameters and (hopefully majority) of chromatograms are correctly integrated, but several others are not (by the way - what is correct integration? - the answer is not trivial with noisy baseline and/or tailing peaks). The chromatographer should inspect every chromatogram and, if needed, correct the integration obtained with pre-selected parameters.

The text in the guideline implies that "automatic integration" is the gold standard and manual integration is suspicious, but this is simply not true. The gold standard is a correctly integrated chromatogram, regardless of whether this integration was obtained with pre-selected integration parameters or with manual integration. The correctness of the integration can easily be verified by an inspector.

The above-mentioned requirement will increase the workload in the analytical laboratory, as for re-integrated chromatograms two copies would have to be stored in the archive - one with the bad integration and a second with the correct one. And the analyst will work with two values for a single sample - the incorrect original one and the improved one - which will certainly increase the chance of error. The justification for the acceptance of each individual chromatogram will always be the same: the original integration was incorrect.

I suggest deleting this requirement from the guideline.
Ohlbe
★★★

France,
2008-08-27 16:49
(5712 d 19:37 ago)

@ janmacek
Posting: # 2263
Views: 22,275
 

 Automatic integration

Dear Janmacek,

Oh, I couldn't agree more with you. This requirement is counter-productive, will not meet its objective, will be a pain in the ass for everybody, and nobody has an idea what the agencies are going to do with it :angry:.

You are right: the "automatic" integration is performed using user-defined parameters anyway. You will always have incorrectly integrated chromatograms, unless you are dealing with high concentrations with very regular peak shapes. It is perfectly legitimate to re-integrate chromatograms, and as you mention, you can't consider "automatic" integration as "gold standard", and manual integration as systematic evil. Some time ago the people from AB/MDS Sciex made a comparison between the 3 integration algorithms in Analyst and manual integration. The conclusion was that manual integration was time consuming and a little less reproducible, but it gave the best results :-P (at least with trained personnel).

I would consider this requirement counter-productive: the analytical labs will be reluctant to re-integrate chromatograms, in order to reduce the paperwork and because they will be afraid of getting into trouble. The general opinion will be that the EMEA does not want chromatograms to be re-integrated (I have already heard a number of CROs say they don't re-integrate chromatograms because the FDA does not like it). The consequence will be that we will have concentrations in the reports calculated from poorly integrated chromatograms.

Asking for the initial concentration to be reported in addition to the concentration after re-integration is stupid: if you need to re-integrate the chromatogram, it means that the initial concentration is not reliable! So why put an unreliable concentration in the report?

This requirement will not prevent fraud: it will not give any information on whether the new integration is correct or not. The guideline does not require submission of the "old" and "new" integrations. And even if the "new" integration were submitted, it can be very difficult to say whether an integration is "correct" or "incorrect", unless it is grossly manipulated. Have the same chromatogram integrated by 10 different analysts and you will get 10 different integrations, all of which may be acceptable. The real question is whether all chromatograms have been integrated in the same way: standards, QCs, and subject samples. To really check the integration of a single re-integrated chromatogram, you may need to look at all the others in the same run.

And anyway, if a lab wants to manipulate chromatogram integrations, what difference will the table make? They may not mention the manipulated integrations in the table, and nobody will see it unless there is an inspection. The table will make no difference from the current situation: inspectors will look at the integrations anyway, whether they have this table or not.

Anyway, what difference does it make whether you have 1%, 5% or 20% re-integrated chromatograms? 1% can be enough to validate failing runs. 1% can be a lot, and 10% very few, depending on the look of your chromatograms, on your LLOQ and on your background noise.

So this requirement will make no difference for bad labs, and will be hell for the good ones...

❝ The above mentioned requirement will increase workload in the analytical laboratory as for re-integrated chromatograms two copies should be stored in the archive - one with bad integration and the second with correct one.


Well, you're already supposed to do that, if you are using paper raw data. Nothing new here.

Regards
Ohlbe
H_Rotter
★    

Germany,
2008-08-28 13:28
(5711 d 22:58 ago)

@ Ohlbe
Posting: # 2269
Views: 22,085
 

 Manual reintegration!

Dear Jan & Ohlbe,

wonderful, thank you very much for your thoughtful comments! In my experience over-regulation leads to something similar to 'regression to the mean' in statistics, namely 'regression to mediocrity'.
'Bad' analysts (not 'well-trained' ones, following Ohlbe's terminology) will not dare to reintegrate chromatograms any more; 'good' ones (or experienced, if you prefer) will reduce the number of reintegrations in an attempt to find a balance between the additional workload and their professional knowledge.

IMHO, in any case the analytical 'quality' will decrease. The section should be deleted entirely - it's simply stupid.

Regards,
Hermann
ElMaestro
★★★

Denmark,
2008-08-28 14:44
(5711 d 21:42 ago)

@ H_Rotter
Posting: # 2272
Views: 22,080
 

 Manual reintegration!

❝ IMHO, in any case the analytical 'quality' will decrease. The section should be deleted entirely - it's simply stupid.


Hi,

there is actually a relevant parallel in the preclinical standard package and certain types of human pharmacology studies. When (new) drugs are evaluated for their potential to provoke torsades de pointes (pardon my French!), it is done by ECGs coupled with software for semi-automated evaluation of the QT intervals. As far as I know, the ECG software automatically finds/calculates/measures the QT interval, while a technician goes manually through the data to verify that the intervals are reasonably accurately identified. In certain cases (s)he may correct the interval, but there is no requirement for printing copies of corrected and uncorrected ECGs, afaik. This is pretty much a similar situation, isn't it? It seems to work well for the ECGs that way. I am not aware of any people dropping dead because the software failed and the corrected and uncorrected QT was not printed.

The software companies might love the idea, though. It would give them an incentive to develop options to scan through audit trails, find chromatograms that have been manually integrated and print them out along with some brainy statistics on the impact of the manual corrections, all in a format suited to the European regulator's eye.

But after all, do thoughts about chromatolophystic integration really belong in the BE-guideline? If European regulators really have something important to say about integration of chromatograms for the bioanalysis for inv. of BE, wouldn't it be relevant to assume they would expect the same standards for PK analyses for other drugs than just the BE ones? If yes, then the whole issues does not belong in the BE guideline.
It would be much more relevant to simply write such things into a separate guideline dealing with bioanalysis from a general perspective - such a GL is much wanted anyway. All in all, my solution is:
  1. Re-think the integration issues, possibly get inspiration from ICH E14.
  2. Remove the integration issues from the BE guideline.
  3. Get started on a general bioanalysis guideline and include the thoughts about integration there.
EM.
d_labes
★★★

Berlin, Germany,
2008-09-02 13:13
(5706 d 23:13 ago)

@ Helmut
Posting: # 2301
Views: 22,261
 

 Subject accountability, Proc MIXED or equivalent

Dear all,

Subject accountability, page 14 (lines 574-576)
'All treated subjects should be included in the statistical analysis, with the exception of subjects in a crossover trial who do not complete at least one period receiving each of the test and reference products (or who fail to complete the single period in a parallel group trial)'.

This ultimately calls for evaluation with SAS Proc MIXED or equivalent software, because the usually employed GLM-ANOVA excludes subjects "listwise" (at least in SAS), i.e. subjects with missing data in any period are excluded.

As is known, in the standard 2x2 cross-over subjects with missing data do not contribute to the ratio (or difference) of test versus reference; only the variability is affected, usually to a minor extent.

Thus it is not clear why GLM-ANOVA should be abandoned, especially in light of lines 500 ff, which request ANOVA.
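
Just to illustrate the point (a sketch only, not anybody's validated method): in Python/statsmodels - as a stand-in for Proc MIXED - a mixed-effects fit on the log scale keeps the available periods of subjects with incomplete data instead of dropping them listwise. The input file and column names are hypothetical, and the Wald-type confidence interval shown is only an approximation of the t/df-based interval a dedicated package would report.

# Sketch only: mixed-model evaluation of a 2x2 cross-over on the log scale,
# keeping subjects with a missing period (hypothetical long-format columns:
# subject, sequence, period, treatment with levels 'R'/'T', and AUC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("crossover.csv")                     # hypothetical file
data = data.dropna(subset=["AUC"])                      # drop missing observations only
data["logAUC"] = np.log(data["AUC"])

fit = smf.mixedlm("logAUC ~ C(sequence) + C(period) + C(treatment)",
                  data, groups=data["subject"]).fit(reml=True)

pe = np.exp(fit.params["C(treatment)[T.T]"])            # point estimate T/R
ci = np.exp(fit.conf_int(alpha=0.10).loc["C(treatment)[T.T]"])  # ~90% CI (Wald)
print(pe, ci.values)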

Regards,

Detlew
Ohlbe
★★★

France,
2008-09-02 17:20
(5706 d 19:06 ago)

@ d_labes
Posting: # 2304
Views: 22,112
 

 Subject accountability, Proc MIXED or equivalent

Dear DLabes,

❝ As is known in the standard 2x2 cross-over subjects with missing data do not contribute to the ratio or difference of Test versus Reference. Only the variability is affected, usually to a minor extent.


I'm not sure I get your point there. In a standard 2x2 cross-over trial, the new version of the guideline would still allow subjects missing one of the two periods to be excluded.

'All treated subjects should be included in the statistical analysis, with the exception of subjects in a crossover trial who do not complete at least one period receiving each of the test and reference products (or who fail to complete the single period in a parallel group trial)'.


IMHO this would apply in the case of a trial with 3 treatments or more (include subjects for whom you have data for at least one reference product and one test product) or replicate design trials (include subjects who have completed at least one period for the test and one for the reference).

Regards
Ohlbe
d_labes
★★★

Berlin, Germany,
2008-09-02 18:30
(5706 d 17:56 ago)

@ Ohlbe
Posting: # 2306
Views: 21,976
 

 Subject accountability, Proc MIXED or equivalent

Dear Ohlbe,

❝ I'm not sure I get your point there. In a standard 2x2 cross-over trial, the new version of the guideline would still allow subjects missing one of the two periods to be excluded.


OK. Maybe - if one reads the whole sentence with your interpretation.
In understandable words it would then mean "... complete at least two periods ..."
(understandable at least for me :-D ).

Regards,

Detlew
d_labes
★★★

Berlin, Germany,
2008-09-03 10:35
(5706 d 01:51 ago)

@ Helmut
Posting: # 2308
Views: 22,952
 

 partial AUC, early exposure

Dear all!

Characteristics to be investigated, page 9 (lines 313-317)
'For products where rapid absorption is of importance, partial AUCs can be used as a measure of early exposure. The partial area can in most cases be truncated at the population median of tmax values for the reference formulation. However, an alternative time point for truncating the partial AUC can be used when clinically relevant. The time point for truncating the partial AUC should be pre-specified and justified in the study protocol.'

This is nearly the same sentence as in the FDA guidance.
It seems this was partly introduced to get rid of tmax and the nonparametric evaluation of that metric.

I wonder how we can interpret "... truncated at population median ...".
What do you think the population median is?

Helmut has discussed this in the context of data deletion due to emesis in this post.
If I understood his opinion correctly, the population median is the sample median of tmax under the reference formulation (study-specific).

But the DRAFT calls for a pre-specified truncation point. I interpret this as not study-specific, but specific to the reference formulation, or derived from clinical relevance (whatever that means).
Where can we get a value for that ominous population median of tmax values for the reference? :ponder:

The next question in this context: who decides for which products rapid absorption is of importance and therefore the partial AUC should be evaluated?
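
Leaving aside where the truncation point comes from, computing the metric itself is straightforward. A minimal sketch (function and toy data are mine, purely illustrative): linear-trapezoidal AUC from zero to the cut-off, with linear interpolation of the concentration if the cut-off falls between two sampling times.

# Sketch only: partial AUC truncated at a pre-specified time point
# (no handling of BLQ or missing values).
import numpy as np

def partial_auc(t, c, t_cut):
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    if t_cut >= t[-1]:
        tt, cc = t, c
    else:
        c_cut = np.interp(t_cut, t, c)      # concentration at the cut-off
        mask = t < t_cut
        tt = np.append(t[mask], t_cut)
        cc = np.append(c[mask], c_cut)
    # linear trapezoidal rule
    return float(np.sum(np.diff(tt) * (cc[:-1] + cc[1:]) / 2.0))

# toy profile, truncated at e.g. a median tmax of 1.5 h for the reference
times = [0, 0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12]
conc  = [0, 10, 40, 90, 100, 95, 70, 50, 25, 12, 3]
print(partial_auc(times, conc, 1.5))        # 87.5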

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-09-23 14:52
(5685 d 21:34 ago)

@ Helmut
Posting: # 2388
Views: 22,334
 

 HVDs?

[image]Dear all!

another one from the 'Far Side':

Standard design, page 5 (lines 148-151)
'If two formulations are going to be compared, a two-period, two-sequence single dose crossover design is the design of choice. The treatment periods should be separated by an adequate wash out period.'
Alternative designs, page 6 (lines 172-175)
'Under certain circumstances, provided the study design and the statistical analyses are scientifically sound, alternative well-established designs could be considered such as [...] replicate designs e.g. for substances with highly variable pharmacokinetic characteristics (see section 4.1.10).'
Number of subjects, page 7 (lines 223-224)
'The number of subjects to be included in the study should be based on an appropriate sample size calculation. The minimum number of subjects in a cross-over study should be 12.'
Acceptance limits, page 14 (lines 561-562)
'[...] Moreover, for highly variable drugs the acceptance interval for Cmax may in certain cases be widened (see section 4.1.10).'
4.1.10 Highly variable drugs or drug products, page 16 (lines 631-641)
'In certain cases, Cmax is of less importance for clinical efficacy and safety compared with AUC. When this is applicable, the acceptance criteria for Cmax can be widened to 75-133% provided that all of the following are fulfilled:
  • the widening has been prospectively defined in the study protocol
  • it has been prospectively justified that widening of the acceptance criteria for Cmax does not affect clinical efficacy or safety
  • the bioequivalence study is of a replicate design where it has been demonstrated that the within-subject variability for Cmax of the reference compound in the study is >30%.
This approach does not apply to AUC.
It is acceptable to apply either a 3-period or a 4-period crossover scheme in the replicate design study.'


OK, in my understanding (1) and (2) of 4.1.10 were already stated in the current NfG (2001) - although (2) was not mandatory according to the Q&A document (CV >30% was enough) - and the second part of (3) already had to be demonstrated in a replicate-design pilot study (Q&A). So what do we gain from the first part of (3), i.e. the mandatory replicate design?
If scaling is not allowed, it makes no difference in terms of power to run a study in a 2x2 design in 96 subjects or in a 2x4 replicate design in 48 subjects. We only increase the chance of drop-outs due to events in the washout phases (3 as compared to 1).
Example: assume a drop-out rate of 10% per washout. In the 2x2 study in 96 subjects this means dosing 96+10=106 subjects (212 treatments); in the 2x4 study in 48 subjects, 48+16=64 subjects (256 treatments). Is that ethical?
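
For what it's worth, a small sketch of that back-of-the-envelope arithmetic, under my own assumptions: drop-outs occur independently in each washout, and treatments are counted crudely as dosed subjects × periods, as above. The compounding assumption gives slightly higher numbers than the simple additive adjustment in the example, but the message is the same.

# Sketch only: subjects to dose (and implied treatments) to end up with a
# target number of completers, assuming a constant drop-out rate per washout.
import math

def plan(completers, periods, washouts, dropout_per_washout=0.10):
    dosed = math.ceil(completers / (1.0 - dropout_per_washout) ** washouts)
    return dosed, dosed * periods            # (subjects dosed, treatments)

print(plan(96, periods=2, washouts=1))       # 2x2: (107, 214)
print(plan(48, periods=4, washouts=3))       # 2x4: (66, 264)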
From a practical point of view a replicate design offers some insight into within-subject variability, but we are not allowed to exclude outliers anyway. So why all that fuss?
One of the authors told me that he would have no problem accepting a study combining 'everything' in one pot: replicate pilot -> HVD demonstrated -> second (BE) phase -> pooling. Is this the intention? Nice, but I would be glad if someone could tell me a valid statistical procedure (no, Bonferroni most likely will not do the job).

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2008-09-23 16:20
(5685 d 20:06 ago)

@ Helmut
Posting: # 2389
Views: 21,793
 

 HVDs?

Dear HS!

I think your confusion is a consequence of "scientifically sound" and "well-established alternatives" which in "certain cases" are "adequate" or "appropriate" and "may be" :-D .

OK, the regulators want to see the variability under the reference treatment in order to judge whether it is an HVD, and this can only be shown in a replicate design.
One has to be cautious, though: in my experience, models with the same variability for test and reference often fit the data of replicate studies just as well.

But if it is an HVD, there is hardly any consequence: no scaling, only a marginal widening of the bioequivalence range, which is appropriate only for moderate variability.

Moreover, their reasoning is not very "scientifically sound": if Cmax has no importance for clinical efficacy and safety, why have a BE limit to meet at all?

And if we talk about HVDs, why is an adjustment for AUC not allowed? We all know HVDs with high variability in AUC as well - and there are brand-name products of them on the market!
With this guidance (if it comes into force unchanged) it will be very hard to substitute them generically (only studies with hundreds of volunteers will do it) :-( .

In my humble opinion this part of the DRAFT totally neglects all the discussions about HVDs in the scientific community and at the US and Canadian regulatory bodies. I guess this will lead to diverging guidelines in the near future. It's a pity.

Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2009-01-20 15:33
(5566 d 19:53 ago)

@ Helmut
Posting: # 3074
Views: 20,796
 

 HVDs?

Dear all!

4.1.10 Highly variable drugs or drug products, page 16 (lines 631-641)


According to discussions at last week's EUFEPS workshop, members of EMEA's PK group unanimously expressed their point of view as follows:
If the reference formulation is suspected to be an HVDP and no clinical reasons speak against widening the acceptance range for Cmax, the pivotal BE study may be started right away in a replicate design (no pilot to demonstrate an HVDP reference followed by a pivotal study to show BE). Running a two-period study of the reference alone is unacceptable, because regulators suspect that such a study would be performed intentionally sloppily in order to obtain a high CV.
The study should be powered to demonstrate BE within the conventional BE range in case CV <30%. If the CV observed in the study is >30%, the acceptance range may be widened to 75%-133%.
Example (expected PE 0.95, 80% power):
CV%    2-way       3-way replicate       4-way replicate
      80%-125%   80%-125%   75%-133%   80%-125%   75%-133%
 30   40 ( 80)   30 ( 90)   18 ( 54)   20 ( 80)   12 ( 48)
 40   66 (132)   50 (150)   28 ( 84)   34 (136)   20 ( 80)
 50   98 (196)   74 (222)   42 (126)   50 (200)   28 (112)
 60  134 (268)  102 (306)   56 (168)   68 (272)   38 (152)
 70  174 (348)  130 (390)   72 (216)   88 (352)   48 (192)
 80  214 (428)  162 (486)   90 (270)  108 (432)   60 (240)
 90  258 (516)  194 (582)  108 (324)  130 (520)   72 (288)
100  300 (600)  226 (678)  124 (372)  150 (600)   84 (336)

The table gives sample sizes and, in parentheses, the number of treatments administered (a rough estimate of study costs). Though more treatments are needed in the 3-way design, one must expect higher drop-out rates in the 4-way design.
The draft defines (line 639) the limit of an HVDP as CV >30%, whereas generally a cut-off of CV ≥30% is used. Hopefully this definition will change in the final version.

Going deeper into the example: you suspect that the CV will be 40%. Since you are not sure, you should power your study for 80%-125% (in case CV <30%). The sample sizes will be 50 (150 treatments) in a 3-way replicate and 34 (136 treatments) in a 4-way replicate. If it turns out that CV=30%, you are powered at 95% to demonstrate BE within the conventional range - since you (mis-)used 50 subjects instead of the needed 30. If CV=40% as expected, you may widen the acceptance range and are powered at 99.8% (or, for 80% power, your PE may lie anywhere within 85.4%-117.1%). You could also have performed the study as a conventional 2x2 in 66 subjects (with fewer treatments). IMHO the only gain is the larger allowed deviation from the reference. I'm not sure whether this was the primary intention of the 'inventors', or maybe I'm misunderstanding something terribly...
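
For anyone who wants to play with such numbers: a little sketch of an approximate power calculation, for the conventional 2x2 design only (the replicate columns additionally need design-specific variance factors and degrees of freedom). It uses the non-central-t approximation rather than the exact calculation, so the last digit may differ slightly from dedicated software; it roughly reproduces the 2-way column of the table above.

# Sketch only: approximate power of the two one-sided tests (TOST) procedure
# for a balanced 2x2 cross-over on the log scale, via the non-central t
# approximation.
from math import log, sqrt
from scipy import stats

def power_tost_2x2(cv, n, theta0=0.95, theta1=0.80, theta2=1.25, alpha=0.05):
    sw    = sqrt(log(1.0 + cv**2))       # within-subject SD, log scale
    se    = sw * sqrt(2.0 / n)           # SE of the T-R difference
    df    = n - 2
    tcrit = stats.t.ppf(1.0 - alpha, df)
    ncp1  = (log(theta0) - log(theta1)) / se
    ncp2  = (log(theta0) - log(theta2)) / se
    return max(0.0, stats.nct.cdf(-tcrit, df, ncp2) - stats.nct.cdf(tcrit, df, ncp1))

for cv, n in [(0.30, 40), (0.40, 66), (0.50, 98)]:   # PE 0.95, 80-125%
    print(f"CV {cv:.0%}, n = {n}: power ~ {power_tost_2x2(cv, n):.2f}")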

P.S.: In the US with RSABE you would probably perform the study as a 3-way replicate design in 34 subjects, independent of the CV...

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2009-03-06 18:21
(5521 d 17:05 ago)

@ Helmut
Posting: # 3335
Views: 20,574
 

 E.g., omeprazole

Dear all!

❝ […] but we are not allowed to exclude outliers anyway.


Really?
According to the recent Q&A-document:

'[…] erratic concentration profiles (either non-existing or extremely delayed) can be obtained. Therefore the sampling period should be designed such that measurable concentrations are obtained, taking into consideration not only the half-life of the drug but the possible occurrence of this effect as well. This should reduce the risk of obtaining incomplete concentration-time profiles due to delay to the most possible extent. These effects are highly dependent on individual behaviour. Therefore, but only under the conditions that sampling times are designed to identify very delayed absorption and that the incidence of this outlier behaviour is observed with a comparable frequency in both, test and reference products, these incomplete profiles can be excluded from statistical analysis provided that it has been considered in the study protocol.'

My emphases.

In other words, if you see an outlier after the reference, cross your fingers to have another one after the test as well. That's true equivalence: a bad test to a bad reference.
Or should we do some statistics? Which test? Maybe Fisher's exact test? For 24 subjects it would take 5 outliers after one formulation versus none after the other to reach a significant difference (p = 0.0496)…
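
That p-value is easy to check - a quick sketch with scipy, tabulating 5 of 24 erratic profiles after one formulation against 0 of 24 after the other:

# Sketch only: Fisher's exact test on the outlier counts per formulation.
from scipy.stats import fisher_exact

table = [[5, 19],    # formulation A: 5 erratic profiles, 19 normal
         [0, 24]]    # formulation B: 0 erratic profiles, 24 normal
oddsratio, p = fisher_exact(table, alternative="two-sided")
print(round(p, 4))   # 0.0496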

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-09-25 17:48
(5683 d 18:38 ago)

@ Helmut
Posting: # 2408
Views: 21,984
 

 BCS-based Biowaiver

Dear all!

BCS-based Biowaiver, IV.1.1 General aspects, page 26 (lines 985-990)
'Comparative in vitro dissolution experiments should follow current compendial standards. [...] Usual experimental conditions are e.g.: [...]
  • Volume of dissolution medium: 500 ml'
No, the usual compendial standard is not 500 ml but 250 ml! We know that guidelines are easily interpreted as laws, both by regulators and by industry - so what now? Follow the pharmacopoeias and perform the test in 250 ml (relying on the 'e.g.' before the 500 ml), or discard all established methods and develop new ones in 500 ml of medium?
BTW, 500 ml contradicts all (?) references published on the BCS...

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-10-13 14:43
(5665 d 21:42 ago)

@ Helmut
Posting: # 2522
Views: 21,705
 

 Dissolution update

Dear all!

The Draft was presented by a representative of the Swedish Authority at the Dissolution/BA/BE Conference in Prague last week. When asked about the '500 ml' she replied 'Oh, I did not realize that; I think that's a typing error.' On the other hand, I heard of deficiency letters already issued by the Spanish Authority to two different companies, asking for dissolution data 'compliant with 500 ml'...
Raising this issue at the EGA-CMP(h)-Symposium in Paris drew a lot of laughter. Everybody should be aware that until the release of the final version of the BE guideline - expected in late autumn 2009 at the earliest - only two documents are binding, namely the NfG on BA/BE (2001) and the Q&A (2006).

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2008-12-23 18:03
(5594 d 17:23 ago)

@ Helmut
Posting: # 2964
Views: 21,046
 

 EUFEPS workshop

Dear all,

see this post for a workshop organized by the EUFEPS.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes