Between study variability common for HVDs [Regulatives / Guidelines]
Dear HS,
❝ Not necessarily. Dan stated in his post that the drug is highly variable. It’s a common property of HVDs that not only the variance is high, but also the location of the T/R-ratio may vary across studies.
I know what you mean, but I think we are talking two different phenomena here.
If there truly is b2b variation, then this is a module 3 issue. However, there could be an (if I may call it so) "apparent" b2b variation that reflects stochastic phenomena even though the batches are not varying (or there could be both, shrug). Variation is evil and... well... unpredictable. Proving the latter phenomenon is next to impossible; for the former, the task is somewhat easier and more practical. So what Dan can do is provide something quite concrete in the form of batch data here instead of entering the discussion around the possibility of random noise accounting for something that could be interpreted as b2b variation.
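To illustrate the "apparent" b2b point, here is a quick sketch (all numbers are assumed for illustration: within-subject CV 50%, 24 subjects per study, true T/R ratio exactly 1.00). Even with identical batches, the study-level point estimates scatter noticeably from study to study:

```python
import math
import random

random.seed(1)

CV = 0.50                                   # assumed within-subject CV of an HVD
sigma = math.sqrt(math.log(CV**2 + 1))      # log-scale SD corresponding to that CV
n = 24                                      # assumed subjects per study

# Simulate 20 studies of identical batches: true T/R ratio = 1.00 throughout.
ratios = []
for _ in range(20):
    logs = [random.gauss(0.0, sigma) for _ in range(n)]
    gmr = math.exp(sum(logs) / n)           # study point estimate (geometric mean ratio)
    ratios.append(gmr)

print(round(min(ratios), 3), round(max(ratios), 3))
```

The spread between the smallest and largest simulated point estimate is pure sampling noise; nothing about the batches changed, yet a naive reader could mistake it for b2b variation.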
In another field of equivalence for pharmaceuticals, the closely related topic of batch selection is slowly becoming a hot topic. I think it will become hot for this type of BE as well. Regulators know of the existence of batch selection, and the potential for variation between batches. In the EU guidance we have "Unless otherwise justified, the assayed content of the batch used as test product should not differ more than 5% from that of the batch used as reference product determined with the test procedure proposed for routine quality testing of the test product."
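That 5% criterion is mechanically simple to check. A minimal sketch, assuming the common reading that "differ more than 5%" refers to percentage points of label claim (the helper name and the example values are hypothetical):

```python
def content_within_limit(test_assay: float, ref_assay: float,
                         limit: float = 5.0) -> bool:
    """Check whether the assayed contents (% of label claim) of the test
    and reference batches differ by no more than `limit` percentage points."""
    return abs(test_assay - ref_assay) <= limit

print(content_within_limit(98.7, 101.2))    # difference 2.5 -> acceptable
print(content_within_limit(95.0, 101.0))    # difference 6.0 -> needs justification
```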
IOW, the choice of batches can affect a study's outcome. In that situation, a lot of companies would be interested in knowing how they can use this as an argument for saving a failed BE study: "We selected the wrong batch, so our study failed. We will conduct a new one with another batch." And who can blame them? The EU guideline specifies that one cannot just neglect the presence of a failed study. I think regulators have actively avoided discussing this issue in depth, in part because acknowledging that batch selection is key to a successful BE study conflicts with a common way of interpreting success in BE:
"Product A is bioequivalent to product B" means that one batch of T has been shown to be BE to one batch of R. It does not imply that all batches of T are (or would be shown in a study to be) BE by the regulatory standard to all batches of R; nevertheless, the latter is a common way for lay people to view it.
Add to that some alpha and beta and you have the perfect legal storm brewing; most EU assessors would rather have the plague than enter this discussion before a judge. And who can blame them?
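For completeness, the pass/fail rule itself: average BE passes iff the 90% CI for the T/R geometric mean ratio lies entirely within 0.80–1.25. A minimal sketch (normal quantile used instead of the t-quantile for simplicity, and the example numbers are illustrative, not from any real study):

```python
import math
from statistics import NormalDist

def be_pass(gmr: float, se_log: float, alpha: float = 0.05,
            lo: float = 0.80, hi: float = 1.25) -> bool:
    """Average-BE decision: pass iff the 90% CI for the T/R geometric
    mean ratio lies entirely within [lo, hi].
    Assumption: normal quantile approximates the t-quantile (large df)."""
    z = NormalDist().inv_cdf(1 - alpha)         # one-sided 5% -> two-sided 90% CI
    lower = gmr * math.exp(-z * se_log)
    upper = gmr * math.exp(z * se_log)
    return lo <= lower and upper <= hi

print(be_pass(gmr=1.05, se_log=0.05))   # tight CI around 1.05 -> pass
print(be_pass(gmr=1.05, se_log=0.15))   # wide CI spills over 1.25 -> fail
```

The alpha sits in the CI coverage; the beta is whatever power the sponsor bought with the sample size. Hence the storm.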
—
Pass or fail!
ElMaestro
Complete thread:
- inter-batch variability? Dr_Dan 2010-08-04 10:29 [Regulatives / Guidelines]
- inter-batch variability? Pavidus 2010-08-04 11:57
- inter-batch variability? d_labes 2010-08-04 13:58
- inter-batch variability? ElMaestro 2010-08-04 17:09
- Between study variability common for HVDs Helmut 2010-08-04 19:45
- Between study variability common for HVDs ElMaestro 2010-08-04 21:01
- Representative batches? Helmut 2010-08-04 23:42
- Representative batches? ElMaestro 2010-08-05 08:40
- Representative batches? Helmut 2010-08-05 12:30
- Representative batches? Dr_Dan 2010-08-05 08:58
- Representative batches? Helmut 2010-08-05 12:45
- Confidence intervals vs. point estimators Dr_Dan 2010-08-06 09:55
- Confidence intervals vs. point estimators ElMaestro 2010-08-06 12:34
- Confidence intervals vs. point estimates Helmut 2010-08-06 13:20
- Confidence intervals vs. point estimators Dr_Dan 2010-08-06 14:44
- Confidence intervals vs. point estimators ElMaestro 2010-08-06 15:01
- meta analysis? martin 2010-08-06 17:25
- meta analysis? ElMaestro 2010-08-06 17:57
- meta analysis? Helmut 2010-08-06 18:31
- meta analysis? Ohlbe 2010-08-06 23:21
- No chance against RMS? Dr_Dan 2010-08-10 12:27
- No chance against RMS? ElMaestro 2010-08-10 16:26
- Batch-to-Batch Pharmacokinetic Variability kumarnaidu 2016-07-20 07:16
- tlast (Common) Helmut 2016-07-20 10:48
- tlast (Common) nobody 2019-02-21 15:20
- tlast (Common) ElMaestro 2019-02-21 16:32
- tlast (Common) nobody 2019-02-21 17:02
- tlast (Common) ElMaestro 2019-02-21 18:02
- tlast (Common) nobody 2019-02-21 18:17