Irregular profiles [Software]
❝ Yes, this is always a problematic situation, and we could discuss it for hours.
Sure. Do you have some spare time?
❝ […] treating [the value at 8 hours] as missing will mean that you just ignore your experimental values when you don't like them.
I would not call this ‘disliking’. There should be an SOP in place where data are reviewed in a blinded manner by an experienced pharmacokineticist, and a request for reanalysis is initiated from that review. The code 'Missing' which WinNonlin uses to skip a data point is a little bit unfortunate. If a value is really missing (e.g., a broken vial), WinNonlin interpolates linearly between the adjacent values, which causes little bias in the absorption phase but a positive bias in the distribution/elimination phase. Kinetica has the option to interpolate logarithmically between data points (see this post). Therefore an SOP for data imputation (regardless of the software used) is not a bad idea. In Simon’s example (values close to the LLOQ) it does not make a big difference as far as the outcome of the study is concerned, but I have seen examples where a BQL value (‘embedded’ in Simon’s terminology) was obtained even after repeated analysis in between two values far above the LLOQ. More on this at the end of this post.
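As a purely illustrative sketch (Python; the half-life, sampling times and concentrations are invented, not data from the study under discussion), this is what the two imputation schemes do to a value missing at 8 h during mono-exponential elimination:

```python
# Hypothetical example: impute a concentration missing at t = 8 h from the
# adjacent samples at 6 h and 10 h, assuming mono-exponential elimination
# with a 4 h half-life (all numbers invented for illustration).
import math

def interp_linear(t, t1, c1, t2, c2):
    """Straight-line interpolation between (t1, c1) and (t2, c2)."""
    return c1 + (c2 - c1) * (t - t1) / (t2 - t1)

def interp_log(t, t1, c1, t2, c2):
    """Log-linear interpolation: a straight line on log-concentrations."""
    return math.exp(math.log(c1) + (math.log(c2) - math.log(c1)) * (t - t1) / (t2 - t1))

c6  = 100.0                    # concentration at 6 h
c10 = 100.0 * 0.5 ** (4 / 4)   # 50.0 at 10 h (one half-life later)
c8  = 100.0 * 0.5 ** (2 / 4)   # ~70.7, the 'true' value at 8 h

print(round(interp_linear(8, 6, c6, 10, c10), 1))  # 75.0 -> positive bias in the elimination phase
print(round(interp_log(8, 6, c6, 10, c10), 1))     # 70.7 -> matches the exponential decay
```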
❝ This can lead to heated discussions with regulators, which can be much less comfortable than posting on a forum.
I’m sure you are right about heated discussions. But on the other hand, why did it never happen to me? Pure chance, or because I always state what is planned not only in an SOP but also in the protocol? I guess that a lot of trouble originates from all kinds of post-hoc ‘decisions’. It is important to start such a plausibility review only once a reasonable percentage of subjects has been analysed (my SOP calls for ≥⅓). It does not make sense to initiate a repeated analysis after the first subject, only to find out that profiles with a highly variable time course are ‘normal’ for this particular drug/formulation/method. Such an overview of the data is related to this thread about the reliability of estimating the elimination.
❝ If the next concentration is just above LLOQ, this can be due to analytical variability. But if it is much higher, I'd first try to understand what's happening: sample inversion, problem at the clinic, analytical error, anything. Even if it has no impact on this study, it can help prevent the recurrence of the problem.
100% agree. We discussed a similar point in this post. We still reanalyse not only the ‘suspect value’ but the neighbouring values as well; these values serve as internal validators. If they are within predefined limits (I don’t want to give the entire SOP here – essentially the limits depend on the inter-day variability of the method plus a little statistics) we can be more confident in accepting or rejecting the suspect value. Reanalysing a single value (even in replicates) is IMHO a flawed concept: why should the second analysis be ‘better’ than the first one?
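The following is only a schematic illustration of the idea of internal validators (Python); the acceptance rule (agreement within k times the inter-day CV) and the multiplier k = 2 are my own placeholders, not the limits of the SOP mentioned above:

```python
# Illustrative sketch only -- not the SOP referred to above (its limits are not disclosed).
# When a suspect value is reanalysed, the neighbouring samples are reanalysed too and
# serve as internal validators: only if they reproduce within the (assumed) limits is the
# repeat of the suspect value considered at all.

def neighbour_ok(original: float, repeat: float, interday_cv: float, k: float = 2.0) -> bool:
    """Hypothetical rule: the neighbour reproduces within +/- k * inter-day CV of the original."""
    return original > 0 and abs(repeat - original) / original <= k * interday_cv

def validators_pass(neighbours: list[tuple[float, float]], interday_cv: float) -> bool:
    """All (original, repeat) neighbour pairs must agree before the suspect repeat is evaluated."""
    return all(neighbour_ok(orig, rep, interday_cv) for orig, rep in neighbours)

# Example with invented numbers: inter-day CV of the method 8 %,
# neighbours at 6 h and 10 h reanalysed together with the suspect 8 h sample.
neighbours = [(102.0, 98.5), (51.3, 53.0)]
print(validators_pass(neighbours, interday_cv=0.08))  # True -> proceed to judge the suspect value
```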
In your response you stated:
❝ We used to do this in a lab where I worked some years ago, but I can't remember seeing it done at any of the labs I have visited since.
Repeating single values reminds me of lab values. A subject shows a value outside the normal range in the post-study evaluation and a follow-up is initiated. The second value is within the normal range. Great, but why do we accept the second one as the ‘true’ one? Only because we ‘like’ it more? Or is the second one by definition ‘better’ than the first one?
❝ IMHO it is a real pity that regulators are allergic to "PK repeats".
Are they? Is it because repeats are generally badly documented or do you think that there are other reasons?
We should not forget that it is always possible that something we cannot control happened to a particular sample, preventing a ‘regular’ result. A stabiliser was not added to the blood sample, the blood sample was not put on ice, or the stopper of the vacutainer was contaminated with some nasty stuff degrading our analyte. All these cases will lead to a value (even if repeated) which does not fit the profile (the FDA coined the term ‘irregular profile’ for such a case). IMHO it does not make sense to stick to an experimental result if it is simply not plausible. Such a value should be dropped. Maybe Pharsight should consider a code for such a value ('Rejected' instead of 'Missing'?). Numerically it could be handled like a missing one, but it should be clear in the output that an experimental result exists.
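A minimal sketch of what such a status code could look like, assuming a hypothetical data structure (this is not an existing WinNonlin/Phoenix feature; all names below are invented for illustration):

```python
# Hypothetical sketch of a 'Rejected' status code: treated like 'Missing' in the
# calculations, but the measured value and its status remain visible in the output.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    OK = "OK"              # value used as measured
    MISSING = "Missing"    # no result exists (e.g., broken vial)
    REJECTED = "Rejected"  # a result exists but was judged implausible and excluded

@dataclass
class Sample:
    time_h: float
    conc: Optional[float]
    status: Status = Status.OK

def usable(profile: list[Sample]) -> list[Sample]:
    """Samples entering the calculation: 'Rejected' is skipped exactly like 'Missing'."""
    return [s for s in profile if s.status is Status.OK]

def listing(profile: list[Sample]) -> None:
    """The output still documents the measured value together with its status."""
    for s in profile:
        conc = "-" if s.conc is None else f"{s.conc:g}"
        print(f"{s.time_h:5.1f} h  {conc:>8}  {s.status.value}")

profile = [
    Sample(6.0, 102.0),
    Sample(8.0, 1.3, Status.REJECTED),  # implausible value embedded in the profile
    Sample(10.0, 51.3),
]
listing(profile)             # the rejected result stays visible in the output
print(len(usable(profile)))  # 2 -> it is excluded from the calculation
```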
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz