Ken Peh
Malaysia, 2014-08-14 19:40
Posting: # 13379

no of samples above ULOQ [Bioanalytics]

Dear Members,

In a BE study involving 26 subjects, 20 subjects had drug levels within the standard calibration curve. However, the last 6 subjects had many samples above the ULOQ. We did not go for a partial method validation because we thought the number would be small. We continued the analysis until we had completed all the samples.

We diluted and reanalyzed these samples. The reanalyzed samples make up about 11% of the total number of samples.

How many samples are allowed to be reanalyzed? :confused: Will too many repeats invalidate the study? How many is considered too many?

Your kind comment is highly appreciated.

Thank you.

Regards,
Ken
ElMaestro
Denmark, 2014-08-14 21:28
@ Ken Peh
Posting: # 13380

no of samples above ULOQ

Hi Ken Peh,

❝ In a BE study involving 26 subjects, 20 subjects had drug levels within the standard calibration curve. However, the last 6 subjects had many samples above the ULOQ. We did not go for a partial method validation because we thought the number would be small. We continued the analysis until we had completed all the samples.


What do you mean when you say you did not go for partial validation?

❝ We diluted and reanalyzed these samples. The reanalyzed samples make up about 11% of the total number of samples.


❝ How many samples are allowed to be reanalyzed? :confused: Will too many repeats invalidate the study? How many is considered too many?


There are, to the best of my knowledge, no rules, and I think there is no big problem here as long as you have dilution integrity as part of the validation.
Diluted samples are as valid as undiluted samples.

You could run into mild trouble if dilution and re-analysis of the samples in question is so labour-intensive that it becomes impossible to stick to your master schedule. That is mainly a GLP thing (cf. 111m), and one most regulators don't really do much about in practice. So in hindsight, perhaps you'll want to review your planning and see what you can do to avoid so much dilution in the future.

Pass or fail!
ElMaestro
Helmut
Vienna, Austria, 2014-08-15 06:16
@ Ken Peh
Posting: # 13381

Pattern?

Hi Ken,

❝ In a BE study involving 26 subjects […] the last 6 subjects had many points above ULOQ.


Sometimes concentrations in a few subjects are higher than anticipated, but this should occur at random. Why the last six subjects? What makes them so special? If I were an inspector, that would ring my alarm bell. I would ask R how likely the observed pattern could occur by pure chance (the code takes a good while to complete; be patient – 28 minutes on my machine):

set.seed(123456)
n   <- 1e8                      # no. of simulations; at least 1 mio needed
x   <- c(rep(0, 20), rep(1, 6)) # 20 "normal" subjects and 6 ">ULOQ"
obs <- 0                        # counter for the "last 6" pattern
for(j in 1:n) { if(sum(tail(sample(x), 6)) == 6) obs <- obs + 1 }
cat("p =", signif(obs/n, 4), "\n")


IMHO p = 4.49·10⁻⁶ should trigger an internal investigation. Some ideas/questions:
  • Was the clinical part of the study performed in groups? If yes, what was the size of the groups?
  • Any documented events in the analytical batches (calibration, IS addition, detector response, etc.)?
  • Look at the subjects’ pre-/post-study lab exams. High lipid parameters in these subjects (matrix effects in LC–MS/MS if no stable-isotope IS was used)?
  • Were the last six subjects underweight females and the others Sumo wrestlers?

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Ken Peh
Malaysia, 2014-08-17 22:31
@ Helmut
Posting: # 13397

Pattern?

Dear ElMaestro and Helmut,

Thank you very much for your kind inputs and suggestions.
Highly appreciated.

Regards,
Ken
Helmut
Vienna, Austria, 2014-08-17 23:27
@ Ken Peh
Posting: # 13398

Not deserving an answer?

Hi Ken,

❝ Thank you very much for your kind inputs and suggestions.

❝ Highly appreciated.


Welcome.

BTW, this is not the first time that we had additional questions in order to clarify things and you never answered them. It would be nice to get a response next time. :smoke:

ElMaestro
Denmark, 2014-08-18 01:17
@ Helmut
Posting: # 13399

Combinatorics

Hi Helmut,

❝ ❝ In a BE study involving 26 subjects […] the last 6 subjects had many points above ULOQ.


Yes, that sounds fishy; I overlooked it.

❝ Sometimes concentrations in a few subjects are higher than anticipated, but this should occur at random. Why the last six subjects? What makes them so special? If I were an inspector, that would ring my alarm bell. I would ask R how likely the observed pattern could occur by pure chance (the code takes a good while to complete; be patient – 28 minutes on my machine):

❝ set.seed(123456)

❝ n   <- 1e8                      # no. of simulations; at least 1 mio needed

❝ x   <- c(rep(0, 20), rep(1, 6)) # 20 "normal" subjects and 6 ">ULOQ"

❝ obs <- 0                        # counter for the "last 6" pattern

❝ for(j in 1:n) { if(sum(tail(sample(x), 6)) == 6) obs <- obs + 1 }

❝ cat("p =", signif(obs/n, 4), "\n")


❝ IMHO p = 4.49·10⁻⁶ should trigger an internal investigation.


Your brain must have had too much carbon monoxide recently :-D:smoke:
Consider your resampled vector of zeros and ones – each arrangement has equal probability, including the one that ends with six ones in a row. So:
1. Your code will converge towards the same value as for any other resampled x-vector. Means: if you had observed the need for dilution in e.g. subjects 4, 7, 13, 14, 19 and 25 (or whatever one would subjectively consider 'more random'), your code would converge to the same probability.
2. That probability is 6!/(26!/20!).
You have six 1's that you wish to place in your vector. The first 1 has 26 places it can go into, the second 1 has 25 it can go into, and so forth. That's how we get the denominator. In the numerator we have 6·5·4·3·2·1 ways to permute the six 1's – they all just look the same.
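The count is easy to verify numerically (a quick sketch, not from the original post): 6!/(26!/20!) is the same as 1/C(26, 6), and the exact value agrees with the simulated 4.49·10⁻⁶ within sampling noise.

```r
# Exact probability that all 6 ">ULOQ" subjects occupy the last 6 of 26 positions:
# 6! orderings of the oddballs over 26!/20! ordered placements,
# which reduces to 1 / choose(26, 6).
p.exact <- factorial(6) / (factorial(26) / factorial(20))
p.exact                                 # ~4.3435e-06
all.equal(p.exact, 1 / choose(26, 6))   # TRUE
```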

Helmut
Vienna, Austria, 2014-08-18 01:43
@ ElMaestro
Posting: # 13400

Well done!

Hi ElMaestro,

❝ Your brain must have had too much carbon monoxide recently :-D:smoke:


Well possible.

❝ 1. Your code will converge towards the same value as any other resampled x-vector. […] your code would converge to the same probability.


I am / was aware of that. Asking for the last 6 is as reasonable as asking for any other specific set. I was not asking for the chance of seeing an arbitrary set of 6 in 26…

❝ 2. That probability is 6!/(26!/20!).


Gosh! With the aid of some more nicotine I should have remembered the binomial coefficient

n! / (k!(n−k)!)

instead of retreating to stupid sim’s.
THX for the nice tutorial!

ElMaestro
Denmark, 2014-08-19 01:54
@ Helmut
Posting: # 13403

Pattern?

Hi Helmut or others,

I kept thinking a little about an appropriate test for the issue of having to do reanalysis in the last six subjects. How do we test or measure for an unexpected pattern?
I kept coming back to ideas of runs tests, which as far as I know tend to be quite insensitive when there are not too many points (here: subjects). We could perhaps divide the 26 subjects into two halves and quantify the probability of having all issues in one half, etc. But that does seem too crude. We could divide the subjects into more chunks and perhaps, under some circumstances, get better resolution (?). OK, 26 as in this specific case is only divisible by two primes, but that's just a practicality.
Which test do you think would be appropriate in these situations?

Helmut
Vienna, Austria, 2014-08-19 05:04
@ ElMaestro
Posting: # 13404

Few runs?

Hi ElMaestro,

❝ […] How do we test or measure for an unexpected pattern?

❝ I kept coming back to ideas of runs tests which as far as I know tend to be quite insensitive when there are not too many points (here: subjects).


I wouldn’t say so. 20–6 is just two runs. Try:

require(tseries)
x <- as.factor(c(rep("normal", 20), rep(">ULOQ", 6)))
runs.test(x, alternative="less") # test against "under-mixing"

        Runs Test

data:  x
Standard Normal = -4.7214,
p-value = 1.171e-06
alternative hypothesis: less

8 runs: p = 0.1003 and 7 runs: p = 0.03192; just two is not an awful lot…
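As a cross-check on the normal approximation used by runs.test (a sketch, not from the original post): with six subjects of one kind among 26, exactly two runs occur only when the six form a single contiguous block at either end of the sequence, so the exact tail probability is 2/C(26, 6) – somewhat larger than the approximate 1.171·10⁻⁶, as expected for a normal approximation this far out in the tail.

```r
# Exact probability of observing at most 2 runs with 6 ">ULOQ" among 26 subjects:
# two runs require the six to sit in one contiguous block at either end,
# i.e. 2 of the choose(26, 6) equally likely arrangements.
p.two.runs <- 2 / choose(26, 6)
p.two.runs   # ~8.69e-06
```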


❝ We could perhaps divide the 26 subjects into two halves and quantify the probability of having all issues in one half etc. But it does seem too crude. We could divide the subjects into more chunks and perhaps under some circumstances get better resolution (?). OK 26 as in this specific case is only divisible by two primes but that's just a practicality.


Why do you want to divide the subjects into more chunks at all? I think there are two issues:
  1. 6 out of 26 were out of range. That’s a high percentage, but shit happens. Either these high concentrations were “normal” (the analyst had insufficient information about the PK, aimed too low, :blahblah:) or sumfink went wrong (I had some ideas, but Ken didn’t respond). In the former case he should do better in the next study, and in the latter try to find an explanation.
  2. Now we should additionally ask how likely it is that we would see these results “clustered” – and not at random. Resampling would help to get an idea.

set.seed(123456)
n <- 1e5
s <- 0
for(j in 1:n) {
  if(runs.test(sample(x), alternative="less")$p.value < 0.05) s <- s+1
}
cat("significant runs:", signif(100*s/n, 4), "%\n")

significant runs: 7.005 %


❝ Which test do you think would find appropriate use in these situations?


I don’t know; I’m just thinking out loud. If I’m not wrong, we can get a maximum of 13 runs, which is not significant:

x <- as.factor(c(rep(c("normal", ">ULOQ"), 6), rep("normal", 14)))
runs.test(x, alternative="greater") # test against "over-mixing"

        Runs Test

data:  x
Standard Normal = 1.5885, p-value = 0.05609
alternative hypothesis: greater
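The 13-run ceiling follows from a counting argument: each of the six minority subjects, placed as a singleton, contributes at most two run boundaries. A sketch (the helper max.runs is illustrative only):

```r
# Maximum possible number of runs for two groups of sizes n1 and n2:
# alternate the minority as singletons -> 2*min(n1, n2) + 1 runs when the
# sizes differ, and 2*n1 runs when they are equal.
max.runs <- function(n1, n2) if (n1 == n2) 2 * n1 else 2 * min(n1, n2) + 1
max.runs(6, 20)   # 13
```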


ElMaestro
Denmark, 2014-08-20 01:32
@ Helmut
Posting: # 13408

Few runs?

Hi Hötzi,

❝ Why do you want to divide the subjects into more chunks at all?


It was just a hunch... the output of my walnut-sized organ, which is a lousy excuse for a brain. The 'thinking' (for lack of a better term) was as follows:

We divide subjects into some chunks.
First half vs last half
First third vs Middle third vs Last third
First quartile vs etc.

We then test the chance that the observations are confined solely to the two extreme chunks. Likely just a very insensitive idea, but that was at least an attempt at explaining the rationale – and at increasing sensitivity by increasing the number of chunks (which has drawbacks of its own).

I wonder how I can derive the exact probability given the sample size N, the number of chunks M (yes, it would be handy if N is divisible by M) and the number of oddballs Q. The probability P can be calculated exactly, but that's one I will save for a sleepless night. :-D:-D:-D

ElMaestro
Denmark, 2014-08-20 02:07
@ ElMaestro
Posting: # 13409

Few runs?

Hmmmmmm.......


❝ I wonder how I can derive the exact probability given the sample size N, number of chunks M (yes, it would be handy if N can be divided by M) and the number oddballs Q. The probability P can be calculated exactly but that's one I will save for a sleepless night.:-D


Dammit, won't be able to sleep until I look a little at this shit:

Let K(A,B) be the binomial coefficient A!/((A−B)!B!).
P(N,M,Q) is the probability that all the Q observations are in the last chunk.

The Q oddballs can be placed among the N subjects in K(N,Q) ways. The Q oddballs can be placed among the N/M subjects of the extreme chunk in K(N/M, Q) ways.

Prob(Q oddballs in last chunk out of M chunks given N subjects)
= K(N/M, Q) / K(N,Q)

........hang on....That absolutely doesn't look right, does it???

Let's take a handy number like 60 subjects:
60 subjects, 3 chunks, 6 oddballs: P = 0.0008
Increase to 4 chunks: P ≈ 10⁻⁴ (it is lower, so better sensitivity in this case).

But I have a feeling my combinatolophystic capabilities are betraying me here. Where's Dr. Rotter?
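The combinatorics do look right; a quick numerical sketch (the helper name P is illustrative only) reproduces both quoted values:

```r
# Probability that all Q oddballs land in one pre-specified chunk of N/M subjects:
# choose(N/M, Q) favourable selections out of choose(N, Q) possible ones.
P <- function(N, M, Q) choose(N / M, Q) / choose(N, Q)
P(60, 3, 6)   # ~0.000774, i.e. the quoted P = 0.0008
P(60, 4, 6)   # ~1.0e-04
```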

Helmut
Vienna, Austria, 2014-08-20 13:27
@ ElMaestro
Posting: # 13412

Hazelnut-sized brain

Hi ElMaestro,

❝ […] I have a feeling my combinatolophystic capabilities are betraying me here.


Are you trying to overheat my hazelnut-sized brain? I would say statistics is just part of the job. First one has to find out whether the number of “>ULOQ” subjects can be explained. If yes, one could ask whether such clustering is (not) at random. If not, it’s just another mystery.

Maturity is the capacity to endure uncertainty.    John Finley


❝ Where's Dr. Rotter?


Still in Germany (the IP address is ambiguous: Schleswig-Holstein or Bavaria)?
His profile tells me that he hasn’t logged in since April 2010. I guess he lost interest in posting here. I miss his – sometimes sarcastic – comments…

An expert is someone who knows some of the worst mistakes
that can be made in his subject,
and how to avoid them.    Werner Heisenberg

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz