Shuanghe
Spain, 2020-05-26 12:56
Posting: # 21469

 An example of IS response. Need your help. [Bioanalytics]

Dear All,

Recently we did a BE study and, among dozens of analytical runs, there is one run where the internal standard (IS) showed an inconsistent response, as shown in the following figure.

[image]

I removed several double-blank samples where the IS responses were zero. Apart from that, all samples in the run are included; the unknown samples come from 3 subjects. The IS responses of unknowns and QCs start to climb upwards after around the 100th injection.

When asked why this run was accepted, the CRO said that the run fulfilled the criteria defined in their SOP, so the run is acceptable and there is no need for repeat analysis.

Now, the product is shitty, so we are far away from the BE criteria. Therefore it really doesn't matter what the true PK values for those subjects are. Repeated or not, there is no way the study conclusion will change. A new formulation will be developed.

However, from a purely bioanalytical point of view, in my opinion there's clearly something wrong here. If runs like this are deemed acceptable according to the SOP and don't even trigger an investigation, I don't care what the criteria are, the SOP may need to be revised. At the very least, the samples from the second half of the run should be repeated. I would repeat the whole run.

What's your opinion/experience on that? Any suggestions? Thanks.

All the best,
Shuanghe
Sukalpa Biswas
India, 2020-05-26 13:49 (edited by Sukalpa Biswas on 2020-05-26 14:05)
@ Shuanghe
Posting: # 21471

 An example of IS response. Need your help.

» [image]

Clearly, the graph shows that the instrument response increased gradually over time. I think the acceptance criterion is ±25% of the mean IS response of the accepted CC and QC samples. The criterion was fulfilled because the IS response in the CC and QC samples increased along with the study samples, as per the SOP.
There might be many reasons behind the enhancement of the IS response:
  1. The system may not have stabilized when the batch was submitted.
  2. The column may not be saturated enough.
  3. This category of molecule may require more stabilization runs, etc.
But there is always the possibility to investigate the reason for the enhanced IS response. The particular batch can be repeated under an investigation with proper documentation.
Personally speaking, I have come across this type of scenario in the past. The chances of a change in the concentrations are small, because we determine concentrations from the area ratio. Looking at the data, I think the IS and analyte responses increased proportionally, since the QC samples passed, resulting in a successful analytical run.
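As an aside, the ±25% check described above is easy to script. Below is a minimal R sketch of it; the column names (`type`, `IS.area`) and the example numbers are my assumptions, not the CRO's actual SOP or data.

```r
# Flag samples whose IS response deviates by more than 25% from the mean
# IS response of the accepted calibrators ("C") and QCs ("Q").
# Column names and data are hypothetical.
check_is <- function(run, limit = 0.25) {
  ref <- mean(run$IS.area[run$type %in% c("C", "Q")])
  dev <- (run$IS.area - ref) / ref
  data.frame(injection = run$injection,
             deviation = sprintf("%+.1f%%", 100 * dev),
             flagged   = abs(dev) > limit)
}

run <- data.frame(injection = 1:6,
                  type      = c("C", "C", "Q", "U", "U", "Q"),
                  IS.area   = c(460000, 470000, 465000, 640000, 650000, 475000))
check_is(run)  # the two unknowns around 640-650k are flagged
```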


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary for understanding your answer; see also this post #5. [Helmut]
Shuanghe
Spain, 2020-05-26 16:32
@ Sukalpa Biswas
Posting: # 21476

 An example of IS response. Need your help.

Dear Sukalpa,

Many thanks for your comments.

» ...The criterion was fulfilled because the IS response in the CC and QC samples increased along with the study samples, as per the SOP.

» ...

» Personally speaking, I have come across this type of scenario in the past. The chances of a change in the concentrations are small, because we determine concentrations from the area ratio.

I don't doubt that the run fulfilled certain criteria with certain numerical values according to the CRO's current SOP. What I'm not comfortable with is the fact that runs like this, with such an obvious trend in the response, were not investigated. It's more about future studies than past ones.

» Looking at the data, I think the IS and analyte responses increased proportionally, since the QC samples passed, resulting in a successful analytical run.

Well, with the above figure in front of me, I wouldn't call this a successful run just because it passed certain numerical values defined in the current SOP. Given that an example like this happened, I would rather question whether the current SOP is reasonable enough.

All the best,
Shuanghe
Ohlbe
France, 2020-05-26 17:46
@ Sukalpa Biswas
Posting: # 21480

 An example of IS response. Need your help.

Dear Sukalpa,

» There might be many reasons behind the enhancement of the IS response:
» 1. The system may not have stabilized when the batch was submitted.
» 2. The column may not be saturated enough.
» 3. This category of molecule may require more stabilization runs, etc.

Could be, though not very likely if this is the only affected run amongst many others, and as the gradual increase is only seen after 100 injections (I would have agreed with you if it had appeared after 20-30 injections and stabilised afterwards at a new level).

» But there is always the possibility to investigate the reason for the enhanced IS response.

Absolutely, and like Shuanghe, I would have liked to see it done here.

» The chances of a change in the concentrations are small, because we determine concentrations from the area ratio.

Yes. And this triggers several questions: are the analyte and the IS affected in the same proportion, at all levels of concentration (in which case the accuracy will be maintained), or not ? Is there a risk of detector saturation, due to the increased response ? What about ion suppression of the analyte by the IS and vice-versa ? What about the linearity of the response and of the calibration curve ?

» Looking at the data, I think the IS and analyte responses increased proportionally, since the QC samples passed, resulting in a successful analytical run.

Well, Shuanghe had not yet provided this information when you posted your message. All we knew was that the run passed, meaning that at least 67% of the QCs passed, including 50% at each level of concentration. Looking at the number of QCs and how they are spread over the run, all QCs in the affected region could have failed and the run would still have been acceptable. What made you say that the analyte increased proportionally ?
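For reference, the run-acceptance rule mentioned here (at least 67% of all QCs within ±15% of nominal, and at least 50% per level) can be sketched in a few lines of R; the data below are hypothetical.

```r
# QC-based run acceptance: >= 2/3 of all QCs within tolerance,
# and >= 50% at each concentration level. Data are hypothetical.
run_accepted <- function(level, accuracy, tol = 0.15) {
  ok      <- abs(accuracy - 1) <= tol
  overall <- mean(ok) >= 2 / 3
  per_lvl <- all(tapply(ok, level, mean) >= 0.5)
  overall && per_lvl
}

lvl <- c("L", "M", "H", "L", "M", "H")
# One high QC in the affected region fails, yet the run still passes:
run_accepted(lvl, c(0.95, 1.02, 0.98, 1.05, 0.97, 1.30))  # TRUE
# Both high QCs failing sinks the run (per-level 50% violated):
run_accepted(lvl, c(0.95, 1.02, 1.30, 1.05, 0.97, 1.30))  # FALSE
```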

Such variations in IS can be perfectly fine, without any influence on the reliability of the data. In other cases, they can ruin your results, even with a stable isotope labelled IS. You don't know until you understand what's going on, or until you demonstrate there is no impact.

Regards
Ohlbe
Ohlbe
France, 2020-05-26 14:42
@ Shuanghe
Posting: # 21472

 An example of IS response. Need your help.

Dear Shuanghe,

» However, from a purely bioanalytical point of view, in my opinion there's clearly something wrong here.

Agreed.

» If runs like this are deemed acceptable according to the SOP and don't even trigger an investigation, I don't care what the criteria are, the SOP may need to be revised.

Agreed.

» At the very least, the samples from the second half of the run should be repeated. I would repeat the whole run.

I would try and understand what happened first, and look at the whole set of information very closely. Anything else that varied here, such as retention times ?

It doesn't look like a subject-specific phenomenon: the QC samples are affected the same way. By the way, how are the results of the last 6 QC samples ? If even the last ones are passing, you could conclude that whatever is happening does not appear to have a large effect on accuracy. I'd still be concerned, until I understand what's going on. The calibration curve is not repeated at the end of the run, so there is no way to check what's happening at the ULOQ, where you may get linearity issues due to detector saturation or ion suppression of the analyte by the IS.

Regards
Ohlbe
Helmut
Vienna, Austria, 2020-05-26 15:35
@ Ohlbe
Posting: # 21473

 Placement of CC

Dear Ohlbe & Shuanghe,

I agree with all of your points.

» The calibration curve is not repeated at the end of the run, so there is no way to check what's happening at the ULOQ, where you may get linearity issues due to detector saturation or ion suppression of the analyte by the IS.

Yep. Many CROs have the CC in duplicates only at the start of the run. IMHO, that should be avoided. I prefer to have one set of singlets at the start and another at the end.

Dif-tor heh smusma 🖖
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Shuanghe
Spain, 2020-05-26 16:32
@ Helmut
Posting: # 21477

 An example of IS response.

Dear Helmut and Ohlbe,

Many thanks for your response.

» Anything else that varied here, such as retention times ?

As far as I can tell from the Excel file that was exported from Analyst, the retention times are very consistent.

» By the way, how are the results of the last 6 QC samples ?

The last 6 QCs are high/medium/low/high/medium/low. The ULOQ is 3500 ng/mL and the HQC is 2660 ng/mL, 76% of the ULOQ. Actually, the last 6 QCs all passed the accuracy criterion, in the range 93–95%. There were 2 failed QCs among the early injections (1 HQC around 82% and 1 LQC just below 85%).

» If even the last ones are passing, you could conclude that whatever is happening does not appear to have a large effect on accuracy. I'd still be concerned, ...

Yes, no large effect on accuracy is expected for this study. But as I said, I'm not really concerned about this study, since the formulation is crappy anyway and we are developing a new one as we speak.

What concerns me are future studies. I certainly don't want studies conducted with this same SOP in force, where runs such as the one above are accepted without any further investigation.

» » The calibration curve is not repeated at the end of the run, so there is no way to check what's happening at the ULOQ, where you may get linearity issues due to detector saturation or ion suppression of the analyte by the IS.
»
» Yep. Many CROs have the CC in duplicates only at the start of the run. IMHO, that should be avoided. I prefer to have one set of singlets at the start and another at the end.


I will suggest to the CRO that they modify their SOP to take incidents like this into consideration. One way is, as you said, to run another set of CC at the end. What else can we do to improve it?

All the best,
Shuanghe
Ohlbe
France, 2020-05-26 17:27
@ Shuanghe
Posting: # 21478

 An example of IS response.

Dear Shuanghe,

» I will suggest to the CRO that they modify their SOP to take incidents like this into consideration. One way is, as you said, to run another set of CC at the end. What else can we do to improve it?

They should read the FDA Q&A, but it doesn't help much here (there are some QC samples with an IS response tracking that of the subject samples).

They should also look at the special issue of Bioanalysis on this topic. It includes papers from both regulators and industry, with nice case studies. The overall conclusion is: acceptance criteria such as mean ± x% are useful, but not in all situations. Plotting the IS response to look for trends and systematic differences is a must.

Also a nice review, just published in the same journal, free access :-).

Regards
Ohlbe
Ohlbe
France, 2020-05-26 17:32
@ Helmut
Posting: # 21479

 Placement of CC

Dear Helmut,

» Many CROs have the CC in duplicates only at the start of the run. IMHO, that should be avoided. I prefer to have one set of singlets at the start and another at the end.

Really ? I usually see only singlets, not duplicates (in the past many labs analysed the LLOQ and ULOQ samples in duplicate, with only one of the results being used following an SOP, but all others were singlets). If duplicate samples are used, it is indeed rather strange to put both at the beginning.

Regards
Ohlbe
Helmut
Vienna, Austria, 2020-05-26 19:03
@ Ohlbe
Posting: # 21481

 Placement of CC

Dear Ohlbe,

» Really ?

Really.

» I usually see only singlets, not duplicates …

That’s risky. If a calibrator at the LLOQ or ULOQ is out of range, you are in trouble.

» (in the past many labs analysed the LLOQ and ULOQ samples in duplicate, with only one of the results being used following an SOP, but all others were singlets).

I see. What does such an SOP look like? Pick the best one?

Ohlbe
France, 2020-05-26 19:43
@ Helmut
Posting: # 21482

 Duplicate LLOQ and ULOQ

Dear Helmut,

» » (in the past many labs analysed the LLOQ and ULOQ samples in duplicate, with only one of the results being used following an SOP, but all others were singlets).
»
» I see. What does such an SOP look like? Pick the best one?

Some just picked the one they liked best. I heard they ran into trouble :-D

Most frequent: use the first result only. If, and only if, it fails (more than 20% deviation at the LLOQ, 15% at the ULOQ), exclude it and use the second. The aim is of course to avoid having to truncate the curve in case of failure.
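A minimal R sketch of that logic (the function name and concentrations are mine, for illustration):

```r
# Use the first replicate; fall back to the second only if the first fails
# its acceptance limit (20% at the LLOQ, 15% elsewhere). Values hypothetical.
pick_calibrator <- function(first, second, nominal, is_lloq = FALSE) {
  tol <- if (is_lloq) 0.20 else 0.15
  if (abs(first / nominal - 1) <= tol) first else second
}

pick_calibrator(3.9, 5.1, nominal = 5, is_lloq = TRUE)  # first is -22%: use second
pick_calibrator(3400, 3600, nominal = 3500)             # first is -2.9%: keep first
```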

I see this less and less often. Not sure why (guess: some FDA inspectors did not like it).

Regards
Ohlbe
Helmut
Vienna, Austria, 2020-05-27 16:30
@ Ohlbe
Posting: # 21484

 Duplicate LLOQ and ULOQ

Dear Ohlbe,

» Most frequent: use the first result only. If, and only if, it fails (more than 20% deviation at the LLOQ, 15% at the ULOQ), exclude it and use the second. The aim is of course to avoid having to truncate the curve in case of failure.
»
» I see this less and less often. Not sure why (guess: some FDA inspectors did not like it).

Count me in. If you analyzed duplicates, use them.

ElMaestro
Belgium?, 2020-05-26 15:44
@ Shuanghe
Posting: # 21474

 An example of IS response. Need your help.

It is a great example of something being or going out of whack, thanks, Shuanghe.

Perhaps the plot shows why it is such a good idea to use an IS. It saves the day.
And yes, that run may be acceptable by many standards for exactly that reason.

Yet it also appears sick to the bone.

It may be that the instrument sensitivity varies (check the level of IS across runs; try plotting means or medians for the sample types of interest), and contrast this run with the subsequent and prior ones on the same instrument (most informative if they were done using the same working/spiking/stock solutions). If the trend is not a one-off, then one of the next runs may face issues with saturation or worse. Between-run IS plots, whichever way you create them, can be very informative and highly predictive for telling whether the method or instrument is about to quit on you. But note I started a thread on this some months ago and no-one on this forum acknowledged what a genius I am :-D:-D:-D:-D
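A sketch of what such a between-run IS check could start from, with simulated data and assumed column names (`run`, `type`, `IS.area`):

```r
# Summarise the IS response per run and sample type, then check for drift
# across runs on the same instrument. Data are simulated for illustration.
set.seed(1)
runs <- data.frame(run     = rep(1:5, each = 20),
                   type    = rep(c("Q", "U"), 50),
                   IS.area = rnorm(100, mean = 470000, sd = 15000))
med <- aggregate(IS.area ~ run + type, data = runs, FUN = median)
# Plot med$run vs med$IS.area by type; as a crude numeric check,
# regress the per-run medians on run number:
fit <- lm(IS.area ~ run, data = med)
coef(fit)[["run"]]  # a slope far from 0 suggests a drifting instrument
```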

If another instrument on the same power line in the lab was active when this happened, even if on another project and with other transitions, try to see whether it showed a fluctuation too. The lab may not grant you permission to look at this if they are independent.

The injector may be unstable. From this plot alone it cannot be determined whether the injected volume has an upward trend or whether it is the instrument sensitivity that varies (or both, or hypothetically a case of within-run variation in the (matrix effect on) ionisation properties, something I have never manifestly faced). At the investigation stage, I do not recommend speculating too heavily about the root cause when defining what to investigate, because then you might restrict yourself a bit in terms of the data collected. But I know this is difficult and certainly impossible to avoid completely.

And check the various temperatures over time, as applicable (vaporiser, heat block, etc.).
That info may not be easily accessible, but it is usually extractable on systems provided by the big vendors.

If you figure out the cause, I'd love to learn of the outcome. :-):-):-)


Edit: Merged with a later (now deleted) post. [Helmut]

I could be wrong, but...

Best regards,
ElMaestro

No, of course you do not need to audit your CRO if it was inspected in 1968 by the agency of Crabongostan.
ksreekanth
India, 2020-07-06 13:37
@ Shuanghe
Posting: # 21653

 An example of IS response. Need your help.

I am in agreement with all the opinions of the panel.

My views:
  1. Bracketing of CC standards might have resolved the issue for the investigation.
  2. ISR results might be supportive.
  3. Apart from the acceptance criteria mentioned, the SOP should define criteria for the number of samples showing an out-of-trend (OOT) response, for a single run as well as across runs, so that there is provision for repeating samples or the batch.
  4. This situation can be handled with a proper investigation in areas such as:
    1. Instrument performance adjacent to such a run, either through multiple equilibration injections or through PPG showing a stable or increased response.
    2. The extraction method used; whether processing was done in a single batch or several; same or different lots of cartridges and solutions; single or multiple analysts involved in processing; any changes made to the solutions on the system; any changes to the needle set point made in the MS during the run; whether the run-size evaluation covered the same number of samples as in this run.
Since only a single run shows such a trend, it should be resolved through a proper initial investigation.
dshah
India, 2020-08-01 07:32
@ Shuanghe
Posting: # 21815

 Trend analysis as per recent USFDA guideline

Dear All!

If we consider the recent FDA guideline or ICH M10, I believe trend analysis for the IS response, CCs and QCs is also required. The FDA does ask whether an upward or downward trend is observed. The important point for the CRO's SOP should be the allowed limit for the IS response. If this is not limited for those 3 subjects, then I believe the CRO's practice and SOP need to be revised.

Regards,
Dshah


Edit: Guidelines linked. [Helmut]
Helmut
Vienna, Austria, 2020-08-01 14:13
@ dshah
Posting: # 21816

 Only for QCs; encouraged ≠ required

Hi dshah,

» If we consider the recent FDA guideline or ICH M10, I believe trend analysis for the IS response, CCs and QCs is also required.

Only for QCs and not required but encouraged (FDA page 30, ICH page 46).
However, Shuanghe’s batch passed the criteria and hence we will possibly not see any trend in the back-calculated concentrations. But of course, the shift in IS responses is obvious.

@Shuanghe: Can you send me a file (CSV preferred) by PM with these columns?

injection, type of sample (containing B, Q, C, U), back-calculated concentration, IS response

I will upload it to have sumfink to play with. Bland-Altman plots might be an option. An idea: Set limits based on the one batch of the method validation which was performed in the anticipated maximum size of a run. If you have data of this batch, please give it as well.

dshah
India, 2020-08-03 09:17
@ Helmut
Posting: # 21818

 Only for QCs; encouraged ≠ required

Hi Helmut!
As per standard SOP - "If the response of internal standard in a sample varies by more than 60% for the isotopically labeled internal standard from the mean ISTD response of CC and QC samples of a particular entire run, then repeat the analysis of the sample under reason Significant variations in response of internal standard"
However, there is no guideline which specifies the limit for IS variation. Thus it could be 30/40/50/60%, as per the CRO's SOP/requirements/experience.
The interesting thing could be whether the QCs' analyte/IS responses are within the acceptable range, i.e. whether the QCs were acceptable. If the IS responses are high, there is a possibility that the QC results will be on the low side. I am wondering what other things can be directly correlated, and what the reason can be for the CRO's acceptance of the variation in IS response?
Regards,
Dshah
Helmut
Vienna, Austria, 2020-08-07 13:28
@ dshah
Posting: # 21832

 Deviation from the mean response

Hi dshah,

» As per standard SOP - "If the response of internal standard in a sample varies by more than 60% for the isotopically labeled internal standard from the mean ISTD response of CC and QC samples of a particular entire run, then repeat the analysis of the sample under reason Significant variations in response of internal standard"

Let’s try that with Shuanghe’s example:

data          <- read.csv("https://bebac.at/downloads/Post21469.csv",
                          sep = ",", dec = ".")
data          <- data[which(data$type == "C" | data$type == "Q"), c(1:2, 4)]
res           <- cbind(data, mean = mean(data$IS.area))
res$deviation <- sprintf("%+.4f%%", 100*(res$IS.area-res$mean)/res$mean)
print(res, row.names = FALSE)

 injection type  IS.area     mean deviation
         1    C 462564.6 490891.8  -5.7706%
         2    C 427928.5 490891.8 -12.8263%
         3    C 436855.6 490891.8 -11.0078%
         4    C 462800.4 490891.8  -5.7225%
         5    C 467778.6 490891.8  -4.7084%
         6    C 481169.4 490891.8  -1.9806%
         7    C 479012.9 490891.8  -2.4199%
         8    C 435110.2 490891.8 -11.3633%
         9    C 464425.2 490891.8  -5.3915%
        10    C 483497.2 490891.8  -1.5064%
        22    Q 470286.3 490891.8  -4.1976%
        29    Q 479924.4 490891.8  -2.2342%
        36    Q 457085.6 490891.8  -6.8867%
        44    Q 479256.7 490891.8  -2.3702%
        51    Q 462271.3 490891.8  -5.8303%
        58    Q 474794.6 490891.8  -3.2792%
        66    Q 480581.6 490891.8  -2.1003%
        73    Q 463233.0 490891.8  -5.6344%
        80    Q 475543.7 490891.8  -3.1266%
        88    Q 476850.6 490891.8  -2.8604%
        95    Q 470929.6 490891.8  -4.0665%
       102    Q 469649.8 490891.8  -4.3272%
       110    Q 483853.7 490891.8  -1.4337%
       117    Q 543378.7 490891.8 +10.6922%
       124    Q 529264.0 490891.8  +7.8168%
       132    Q 647562.5 490891.8 +31.9155%
       139    Q 602118.0 490891.8 +22.6580%
       146    Q 677244.1 490891.8 +37.9620%

With the 60% deviation limit of the quoted SOP there would be no reason to repeat the run.

» However, there is no guideline which specifies the limit for IS variation. Thus it could be 30/40/50/60%, as per the CRO's SOP/requirements/experience.

Correct. Here only your 30% would “work”.
IMHO, this approach is problematic. It depends on many things: the number of CCs and QCs, where they are placed within the run, etc. In Shuanghe’s example we have 22 before the trouble started (according to the broken-stick regression, at injection 107) and 6 after. Hence the overall mean of 490892 is misleading (closer to the 466434 of the 1st limb than to the 580570 of the 2nd). You would have to be very restrictive in setting the maximum deviation.

It is a little less restrictive if we look only at the QCs (acc. to the FDA’s guidance and the ICH draft):

 injection type  IS.area     mean deviation
        22    Q 470286.3 507990.4  -7.4222%
        29    Q 479924.4 507990.4  -5.5249%
        36    Q 457085.6 507990.4 -10.0208%
        44    Q 479256.7 507990.4  -5.6563%
        51    Q 462271.3 507990.4  -9.0000%
        58    Q 474794.6 507990.4  -6.5347%
        66    Q 480581.6 507990.4  -5.3956%
        73    Q 463233.0 507990.4  -8.8107%
        80    Q 475543.7 507990.4  -6.3873%
        88    Q 476850.6 507990.4  -6.1300%
        95    Q 470929.6 507990.4  -7.2956%
       102    Q 469649.8 507990.4  -7.5475%
       110    Q 483853.7 507990.4  -4.7514%
       117    Q 543378.7 507990.4  +6.9663%
       124    Q 529264.0 507990.4  +4.1878%
       132    Q 647562.5 507990.4 +27.4753%
       139    Q 602118.0 507990.4 +18.5294%
       146    Q 677244.1 507990.4 +33.3183%


» The interesting thing could be - whether the QC's analyte/IS standard response are within acceptable range - i.e. whether the QC's were acceptable?

In Shuanghe’s example the run passed. AFAIK, the CRO didn’t assess the IS response at all.

» I am wondering what other things can be directly correlated …

Some possibilities were discussed before.

» … and what can be the reason for CRO's acceptability of variation in IS response?

I can only guess (arbitrary order):
  • Not mentioned in any guideline, nothing to worry about.
  • Followed the SOP blindly. The run passed all criteria, what’s the problem?
  • Bad science. :thumb down:
    Back in the days when bioanalysts were proud of their scientific attitude, such a run would definitely have rung the alarm bell.
Shuanghe is on vacation. Maybe he can share some insights once back.

Shuanghe
Spain, 2020-08-03 13:56
@ Helmut
Posting: # 21819

 Only for QCs; encouraged ≠ required

Hi Helmut,

» @Shuanghe: Can you send me a file (CSV preferred) by PM with these columns?
»
» injection, type of sample (containing B, Q, C, U), back-calculated concentration, IS response
»
» I will upload it to have sumfink to play with. Bland-Altman plots might be an option.

I sent 2 emails with the CSV and Excel file; the latter has many more columns in case you need them.

» An idea: Set limits based on the one batch of the method validation which was performed in the anticipated maximum size of a run.

Damn, that's a nice idea. I didn't think about it during the discussion with the CRO about the modification of their SOP.

» If you have data of this batch, please give it as well.

Sorry, I don't have it.

All the best,
Shuanghe
Helmut
Vienna, Austria, 2020-08-04 18:04
@ Shuanghe
Posting: # 21822

 Broken-stick regression

Hi Shuanghe,

» I sent 2 emails with the CSV and Excel file; the latter has many more columns in case you need them.

THX!

» » An idea: Set limits based on the one batch of the method validation which was performed in the anticipated maximum size of a run.
»
» Damn, that's a nice idea. I didn't think about it during the discussion with the CRO about the modification of their SOP.

We can perform a broken-stick regression:

library(segmented)
data <- NULL # nullify eventual previous one
data <- read.csv("https://bebac.at/downloads/Post21469.csv", sep = ",", dec = ".",
                 stringsAsFactors = TRUE)
data <- data[!is.na(data$IS.area), ] # remove ones without IS
attach(data)
set.seed(123456)
mod <- lm(IS.area ~ injection, data = data)
seg <- segmented(mod)
con <- predict(seg, se.fit = TRUE, interval = "confidence")
pre <- suppressWarnings(predict(seg, se.fit = TRUE, interval = "prediction"))
summary(seg); confint(seg); slope(seg, digits = 9); intercept(seg, digits = 9)$injection
if (is.null(attr(dev.list(), "names"))) windows(width = 6, height = 5)
op  <- par(no.readonly = TRUE)
par(mar=c(4, 4, 0, 0.3) + 0.1, cex.axis = 1, cex.lab = 1.25)
col <- c("blue", "red", "darkgreen", "magenta")
bg  <- c("#0000FF80", "#FF000080", "#00900080", "#FF00FF80")
plot(seg, ylim = range(IS.area, na.rm = TRUE),
     xlab = "Injection Number", ylab = "IS Area", lwd = 3, col = "blue")
grid()
abline(v = confint(seg), col = "blue", lty = c(1, 3, 3))
text(confint(seg)[2], max(IS.area, na.rm = TRUE), labels = round(confint(seg)[2]),
     pos = 2, cex = 0.9)
text(confint(seg)[1], min(IS.area, na.rm = TRUE), labels = round(confint(seg)[1]),
     pos = 4, cex = 0.9)
text(confint(seg)[3], max(IS.area, na.rm = TRUE), labels = round(confint(seg)[3]),
     pos = 4, cex = 0.9)
lines(as.numeric(dimnames(pre$fit)[[1]]), con$fit[, "lwr"], col = "blue")
lines(as.numeric(dimnames(pre$fit)[[1]]), con$fit[, "upr"], col = "blue")
lines(as.numeric(dimnames(pre$fit)[[1]]), pre$fit[, "lwr"], col = "blue", lty = 2)
lines(as.numeric(dimnames(pre$fit)[[1]]), pre$fit[, "upr"], col = "blue", lty = 2)
points(injection[type == "C"], IS.area[type == "C"], pch = 21, col = col[1], bg = bg[1],
       cex = 1.35)
points(injection[type == "B"], IS.area[type == "B"], pch = 21, col = col[2], bg = bg[2],
       cex = 1.35)
points(injection[type == "Q"], IS.area[type == "Q"], pch = 21, col = col[3], bg = bg[3],
       cex = 1.35)
points(injection[type == "U"], IS.area[type == "U"], pch = 21, col = col[4], bg = bg[4],
       cex = 1.35)
par(op)


Which gives

        ***Regression Model with Segmented Relationship(s)***

Call:
segmented.lm(obj = mod)

Estimated Break-Point(s):
                   Est. St.Err
psi1.injection 107.392  1.054

Meaningful coefficients of the linear terms:
              Estimate Std. Error t value Pr(>|t|)   
(Intercept)  458563.17    3100.28 147.910  < 2e-16 ***
injection       151.47      49.54   3.058  0.00269 **
U1.injection   5439.53     221.22  24.589       NA   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 15150 on 135 degrees of freedom
Multiple R-Squared: 0.9462,  Adjusted R-squared: 0.945

Convergence attained in 2 iter. (rel. change 0)
                  Est. CI(95%).low CI(95%).up
psi1.injection 107.392     105.306    109.477
$injection
            Est.   St.Err.   t value  CI(95%).l CI(95%).u
slope1  151.4664  49.53669  3.057662   53.49811  249.4348
slope2 5591.0012 215.60080 25.932191 5164.60917 6017.3932
                Est.
intercept1  458563.2
intercept2 -125596.8


The fit is fine, you can try

psi   <- round(confint(seg)[1])
limb1 <- data[(data$injection <= psi), ]
limb2 <- data[(data$injection > psi), ]
st.r  <- rstudent(seg)
inj   <- as.numeric(names(st.r))
ylim  <- c(-1, 1)*max(abs(st.r)) # symmetric
col   <- c("blue", "red")
bg    <- c("#0000FF80", "#FF000080")
plot(inj, st.r, type = "n", ylim = ylim, xlab = "Injection",
     ylab = "Studentized residual")
grid(); box(); abline(h = 0)
rug(inj, ticksize = 0.015); rug(st.r, side = 2, ticksize = 0.015)
points(inj[inj <= psi], st.r[inj <= psi], pch = 21, col = col[1],
       bg = bg[1], cex = 1.35)
points(inj[inj > psi], st.r[inj > psi], pch = 21, col = col[2],
       bg = bg[2], cex = 1.35)
plot(seg$fitted, st.r, type = "n", ylim = ylim, xlab = "Fitted",
     ylab = "Studentized residual")
grid(); box(); abline(h = 0)
rug(seg$fitted, ticksize = 0.015); rug(st.r, side = 2, ticksize = 0.015)
points(seg$fitted[inj <= psi], st.r[inj <= psi], pch = 21, col = col[1],
       bg = bg[1], cex = 1.35)
points(seg$fitted[inj > psi], st.r[inj > psi], pch = 21, col = col[2],
       bg = bg[2], cex = 1.35)


[image]

12 |stud. res| > 2 (8.6%) and 1 > 3 (0.7%). Doesn’t worry me.
1st residual plot: No trend, no funnel shape, similar spread in the two limbs (in the second a bit larger).
2nd residual plot: The two clusters are evident which, of course, we know from looking at the entire batch (fit, 95% confidence and prediction intervals, breakpoint and its 95% confidence interval):

[image]

It doesn’t need a spectacled rocket scientist to see that the slopes are different.
Well, we can test that.* H0 is that the difference of slopes = 0.

pscore.test(seg, k = nrow(data), alternative = "two.sided")

Gives

        Score test for one/two changes in the slope

data:  formula = IS.area ~ injection
breakpoint for variable = injection
model = gaussian , link = identity , method = segmented.lm
observed value = 26.288, n.points = 139,
p-value < 2.2e-16
alternative hypothesis: two.sided   (1 breakpoint)

Gotcha!

» » If you have data of this batch, please give it as well.
» Sorry, I don't have it.

I will try to simulate some data based on the first part. Then we could set control limits based on Bland–Altman.
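A sketch of what such control limits could look like — Bland–Altman-style mean ± 1.96 SD of the IS response of the stable part of the run, then flagging later injections outside the limits. All numbers are simulated, not the study’s:

```r
# Bland-Altman-style control limits for IS response (sketch, simulated data).
# Limits derived from the stable first limb; later injections are flagged
# when they fall outside mean +/- 1.96 SD.
set.seed(1)
is.resp <- rnorm(100, mean = 4.7e5, sd = 2.5e4)          # stable first limb
drift   <- 4.7e5 + 5600 * (1:39) + rnorm(39, sd = 4e4)   # drifting second limb
limits  <- mean(is.resp) + c(-1, +1) * 1.96 * sd(is.resp)
flagged <- which(drift < limits[1] | drift > limits[2]) + 100  # injection no.
head(flagged)   # first injections outside the control limits
```

With a drift of this magnitude, essentially all of the late injections end up outside the upper limit.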


Edit: I was asking myself how sensitive this approach is, i.e., how much the slopes can differ before the test gives a significant p-value. Crude method: I fitted the 2nd limb and gradually decreased its slope until it equalled that of the 1st limb.

psi     <- round(confint(seg)[1])
slope   <- as.numeric(slope(seg, digits=15)$injection[, 1])
intcpt  <- as.numeric(intercept(seg, digits=15)$injection[, 1])
limb1   <- data[(data$injection <= psi), c(1, 4)]
limb2   <- data[(data$injection > psi), c(1, 4)]
fit1    <- intcpt[1]+slope[1]*limb1$injection
fit2    <- intcpt[2]+slope[2]*limb2$injection
sims    <- 50
slp.new <- seq(slope[2], slope[1], length.out = sims)
int.new <- seq(intcpt[2], intcpt[1], length.out = sims)
noise   <- intcpt[2]+slope[2]*limb2$injection - limb2$IS.area
res     <- data.frame(slope = rep(slp.new, each = nrow(limb2)),
                      intercept = rep(int.new, each = nrow(limb2)),
                      inj = limb2$injection, fit = NA,
                      noise = rep(noise, length(slp.new)),
                      IS.area = NA)
for (j in 1:nrow(res)) {
  res$fit[j]     <- res$intercept[j]+res$slope[j]*res$inj[j]
  res$IS.area[j] <- res$fit[j]+res$noise[j]
}
names(res)[3] <- "injection"
sens <- data.frame(slope.ratio = slp.new/slope[1],
                   p.value = NA, sign = FALSE)
for (j in 1:sims) {
  tmp <- res[, c(1, 3, 6)]
  sel <- which(tmp$slope == slp.new[j])
  tmp <- rbind(limb1, tmp[sel, 2:3])
  m1  <- lm(IS.area ~ injection, data = tmp)
  s1  <- segmented(m1)
  sens$p.value[j] <- pscore.test(s1, k = nrow(tmp),
                                 alternative = "two.sided")$p.value
  if (sens$p.value[j] < 0.05) sens$sign[j] <- TRUE
}
sens[, 1:2] <- signif(sens[, 1:2], 4)
print(sens, row.names = FALSE)

Gives

 slope.ratio   p.value  sign
      36.910 3.408e-55  TRUE
      36.180 3.607e-54  TRUE
      35.450 3.946e-53  TRUE
      34.710 4.468e-52  TRUE
      33.980 5.237e-51  TRUE
      33.250 6.357e-50  TRUE
      32.520 7.998e-49  TRUE
      31.780 1.043e-47  TRUE
      31.050 1.412e-46  TRUE
      30.320 1.982e-45  TRUE
      29.580 2.889e-44  TRUE
      28.850 4.372e-43  TRUE
      28.120 6.871e-42  TRUE
      27.380 1.121e-40  TRUE
      26.650 1.900e-39  TRUE
      25.920 3.343e-38  TRUE
      25.190 6.102e-37  TRUE
      24.450 1.155e-35  TRUE
      23.720 2.266e-34  TRUE
      22.990 4.599e-33  TRUE
      22.250 9.649e-32  TRUE
      21.520 2.088e-30  TRUE
      20.790 4.650e-29  TRUE
      20.060 1.063e-27  TRUE
      19.320 2.486e-26  TRUE
      18.590 5.924e-25  TRUE
      17.860 1.432e-23  TRUE
      17.120 3.492e-22  TRUE
      16.390 8.538e-21  TRUE
      15.660 2.078e-19  TRUE
      14.930 4.994e-18  TRUE
      14.190 1.174e-16  TRUE
      13.460 2.670e-15  TRUE
      12.730 5.811e-14  TRUE
      11.990 1.194e-12  TRUE
      11.260 2.283e-11  TRUE
      10.530 4.000e-10  TRUE
       9.795 6.318e-09  TRUE
       9.062 8.835e-08  TRUE
       8.329 1.074e-06  TRUE
       7.596 1.115e-05  TRUE
       6.863 9.702e-05  TRUE
       6.130 5.533e-04  TRUE
       5.397 3.312e-03  TRUE
       4.665 1.615e-02  TRUE
       3.932 6.266e-02 FALSE
       3.199 1.904e-01 FALSE
       2.466 4.514e-01 FALSE
       1.733 8.453e-01 FALSE
       1.000 7.228e-01 FALSE

The slope of the 2nd limb has to be roughly four times that of the 1st before the difference is detected.
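From the table, the largest non-significant ratio is 3.932 (p = 0.0627) and the smallest significant one is 4.665 (p = 0.0162). Interpolating log(p) linearly between those two rows gives the approximate ratio at which p crosses 0.05:

```r
# Interpolate the slope ratio at which p crosses 0.05 (values from the table)
ratio <- c(3.932, 4.665)
logp  <- log(c(6.266e-2, 1.615e-2))
# linear interpolation of ratio against log(p); approx() sorts the x values
cross <- approx(logp, ratio, xout = log(0.05))$y
round(cross, 2)   # roughly 4.05
```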

[image: p-value of the score test vs slope ratio]


TODO: Scale the noise (which was higher in the 2nd than in the 1st limb).
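One possible way to do that scaling — shrink the limb-2 residuals to the limb-1 noise level by the ratio of their standard deviations (sketch with simulated residuals):

```r
# Rescale limb-2 residuals to the limb-1 noise level (sketch, simulated)
set.seed(7)
res1 <- rnorm(100, sd = 2e4)   # residuals, 1st limb (lower noise)
res2 <- rnorm(39,  sd = 6e4)   # residuals, 2nd limb (higher noise)
res2.scaled <- res2 * sd(res1) / sd(res2)
round(c(sd1 = sd(res1), sd2 = sd(res2), sd2.scaled = sd(res2.scaled)))
```

By construction, sd(res2.scaled) equals sd(res1), so the rescaled noise could be added to the simulated 2nd limb without inflating its spread.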


  • Muggeo VMR. Testing with a nuisance parameter present only under the alternative: a score-based approach with application to segmented modelling. J Stat Comput Simul. 2016;86(15):3059–3067. doi:10.1080/00949655.2016.1149855.

Dif-tor heh smusma 🖖
Helmut Schütz