Bioequivalence and Bioavailability Forum



 
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-07 09:18
(edited by yjlee168 on 2015-12-07 11:14)

Posting: # 15699
Views: 11,798
 

 bear v2.7.0 released [R for BE/BA]

Dear all,

I have released bear v2.7.0 on sourceforge. One minor change in this release is that bear now checks the whole dataset first, to make sure that every subject's λz (the terminal-phase elimination rate constant, also called 'ke' or 'kel') can be calculated before proceeding to NCA, based on the selected λz calculation method (ARS, AIC, TTT, TTTARS, or TTTAIC). This error has been reported by several users (such as here and here). The error message looks like: Error in NAToUnknown.default(x = ke, unknown = 0) : 'x' already has value “0”

Basically, ARS and AIC use the same criterion to check the dataset (OK if there are more than 2 data points after Tmax), while TTT and its combined forms require more than 2 data points after 2×Tmax. Data of any subject failing the check are listed on the R console/terminal. If any subject does not meet the criterion of the selected λz method, bear aborts and returns to the top menu. In that case, users need to decide whether to (i) change the λz calculation method (manual selection?) or (ii) delete the entire subject from the dataset if necessary.
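For illustration, the criteria above can be sketched as follows. This is a minimal sketch in Python (bear itself is written in R), and the function name is made up, not bear's actual code:

```python
def enough_points_for_lambda_z(times, concs, method="ARS"):
    """Eligibility check for lambda_z estimation, as described above:
    ARS/AIC need more than 2 data points after Tmax; TTT and its
    combined forms (TTTARS, TTTAIC) need more than 2 points after 2*Tmax."""
    tmax = times[concs.index(max(concs))]           # time of Cmax
    cutoff = 2 * tmax if method.startswith("TTT") else tmax
    return sum(1 for t in times if t > cutoff) > 2  # points in the terminal phase

# A profile sampled up to 12 h with Tmax = 2 h:
times = [0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 8, 12]
concs = [0, 36.1, 125, 567, 932, 1343, 1739, 1604, 1460, 797, 383]
enough_points_for_lambda_z(times, concs, "ARS")  # 4 points after 2 h -> True
enough_points_for_lambda_z(times, concs, "TTT")  # 2 points after 4 h -> False
```

A subject that passes with ARS can thus still fail with TTT, which is why bear asks the user to either switch methods or drop the subject.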

That's it and thank you for reading.

PS: The dataset-checking function is built into the original code (it is not on the menu list), so if the dataset is OK, bear continues as usual. [added after OP]

All the best,
---Yung-jin Lee
bear v2.8.3 - created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Download link (updated) -> here
ElMaestro
Hero

Denmark,
2015-12-07 10:12

@ yjlee168
Posting: # 15700
Views: 10,642
 

 Suggestion

Hi yjlee,

» I have released bear v2.7.0 (...)

Good work. I think Bear is coming close to being a solution for CROs that do not have access to expensive commercial software.
In a future version, could you add an option to generate randomisation schemes for the various BE designs?
:ok:

I could be wrong, but…


Best regards,
ElMaestro

No, I still don't believe much in the usefulness of IVIVCs for OIPs when it comes to picking candidate formulations for the next trial. This is not the same as saying I don't believe in IVIVCs.
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-07 10:55

@ ElMaestro
Posting: # 15701
Views: 10,511
 

 Suggestion

Dear Elmaestro,

» [...] In a future version, could you add an option to generate randomisation schemes for the various BE designs?

Thank you so much. It's a really great idea; I will put it at the top of the to-do list. Another one is BE dataset simulation based on in-vitro dissolution profiles and PBPK (IVIVC + PBPK -> IVIVE??). And a TSD (two-stage design) has also been waiting for years...

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2015-12-07 14:01

@ yjlee168
Posting: # 15703
Views: 10,618
 

 Suggestion

Hi Yung-jin,

» » [...] In a future version, could you add an option to generate randomisation schemes for the various BE designs?
»
» […] It's a really great idea. I will put this into the top of to-do-list.

One option is to include Detlew’s package randomizeBE as a dependency when building the package. Then you don’t have to re-invent the wheel and only need to handle the in-/output within bear.

» The other one is considering BE dataset simulation based on the in-vitro dissolution profile and PBPK (IVIVC + PBPK -> IVIVE??).

Wow!

» Also there is a TSD waiting for years...

According to EMA’s Q&A (Rev.7 of 2013-02-13) the model for the final analysis (pooled data) is:
stage, sequence, sequence × stage, subject(sequence × stage), period(stage), formulation.

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-07 18:51

@ Helmut
Posting: # 15705
Views: 10,514
 

 Suggestion

Dear Helmut,

» One option is to include Detlew’s package randomizeBE as a dependency when building the package. Then you don’t have to re-invent the wheel and only need to handle the in-/output within bear.

Excellent package! I'll study randomizeBE more to see how to arrange the I/O (just installed it :-D).

» According to EMA’s Q&A (Rev.7 of 2013-02-13) the model for the final analysis (pooled data) is:
» stage, sequence, sequence × stage, subject(sequence × stage), period(stage), formulation.

Thank you for the information. Same as this by Detlew.

All the best,
---Yung-jin Lee
Astea
Regular

Russia,
2015-12-10 16:36

@ yjlee168
Posting: # 15717
Views: 10,314
 

 bear v2.7.0 released

Dear yjlee168!

Firstly, thank you a lot for your unique program!

I've got some questions about the calculations. Do I understand correctly that bear can calculate only AUCall but not AUClast?
What is the formula for the AUCinf calculation? I guess it is AUCinf = AUCall + C/lambda_z, where C is the observed concentration at the last time point, so AUCinf is actually AUCinf_obs based on AUCall. Am I right?

Why does bear crash when there is a zero concentration at the last time point? I changed only one point (the last concentration of the first subject in the Single2x2x2 demo data), and bear returns:

Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
  NA/NaN/Inf in 'y'

It allows the calculation only with manual selection of points. That is very inconvenient, especially when we want to get preliminary information on the confidence intervals for AUC and Cmax. There is no need to calculate lambda_z and AUCinf if we just want AUC and Cmax...
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-10 17:23
(edited by yjlee168 on 2015-12-10 18:55)

@ Astea
Posting: # 15718
Views: 10,310
 

 if Clast is zero in bear

Dear Astea,

» I've got some questions about calculations. Do I understand correctly that BEAR can calculate only AUCall but not AUClast?

No, bear calculates AUC0-last first, then AUC0-inf, which equals AUC0-last + Clast/λz (Clast: the last measurable conc., not the last conc.).

» What is the formula for AUCinf calculation? I guess it is like AUCinf=AUCall+C/lambda_z, where C is the observed concentration in the last point, so AUCinf is actually AUCinf_obs, based on AUCall, am I right?

:confused: Yes, I guess you're correct if your AUCall is AUC0-last here by definition.

» Why does BEAR crash when there is a zero-concentration in the last point? (I change only one point: last point in concentrations of the first subject in the Single2x2x2 demo-data and BEAR returns:

I see. That's because the linear regression function (lm) in R cannot accept any y = 0 here: we take the logarithm of conc. (y) first and then do a linear regression of log(conc.) vs. time (x), so the error is due to log(0). [edited] You should remove the last point whose conc. value is zero; alternatively, you can input 'NA' instead of zero. Usually these are concentrations below the lower limit of quantification; they are not really zero, I think.
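The failure and the workaround can be reproduced outside of bear as well. Here is a minimal sketch in Python (the helper below is hypothetical, not bear's R code): log-transforming a zero concentration gives minus infinity and breaks the regression, whereas an 'NA' can simply be skipped:

```python
import math

def lambda_z(times, concs):
    """Estimate lambda_z as the negated least-squares slope of log(conc) vs. time.
    conc == 0 would give log(0) = -inf (the "NA/NaN/Inf in 'y'" error in R's lm),
    so zero and NA (None) points are dropped, mirroring the advice above."""
    pts = [(t, math.log(c)) for t, c in zip(times, concs) if c is not None and c > 0]
    mt = sum(t for t, _ in pts) / len(pts)          # mean time
    my = sum(y for _, y in pts) / len(pts)          # mean log-conc.
    slope = (sum((t - mt) * (y - my) for t, y in pts)
             / sum((t - mt) ** 2 for t, _ in pts))
    return -slope

# Terminal points of the demo subject, with the trailing zero entered as NA:
lambda_z([4, 8, 12, 24], [1460, 797, 383, None])   # ~0.16727
```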

» Error in
» lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
» NA/NaN/Inf in 'y'

» It allows calculation only for the manual selection of points.

Not quite correct. You do not need manual selection of data points if you remove the zero conc. after Tmax or just input 'NA'.

» ...There is no need to calculate lambda_z and AUCinf if we just want to calculate AUC and Cmax...

Agreed (in some countries or regions). But most of the time we need to calculate AUC0-inf, so the calculation of lambda_z is required. BTW, if you just need Cmax and AUC, removing the data points with conc. = 0 will not affect the final results. I will consider removing data points with conc. = 0 after Tmax automatically in the next release.

All the best,
---Yung-jin Lee
Astea
Regular

Russia,
2015-12-10 20:50

@ yjlee168
Posting: # 15720
Views: 10,303
 

 if Clast is zero in bear

Dear yjlee168!

Thanks a lot for your answer!
The problem is in understanding the definitions; there are so many AUCs (see this thread)...

WinNonlin uses the definition AUClast for the AUC up to the last measurable concentration and AUCall for the whole area under the curve. There are also two types of AUC0-inf: predicted and observed. I think bear calculates AUC0-inf = AUClast + Cobs/lambdaz, doesn't it?

» » It allows calculation only for the manual selection of points.
»
» Not quite correct here. You may not require manual selection for data points if you remove the zero conc. after Tmax or just input as 'NA'.

I've made several calculations and came to the conclusion that when we manually select points in a situation with a zero last point, we get wrong results for AUCinf. The correct method, as you already pointed out, is to exclude the last point or to set it to NA.

Some of my calculations are below:
initial data:
"subj","seq","prd","time","conc"
1,2,2,0,0
1,2,2,0.25,36.1
1,2,2,0.5,125
1,2,2,0.75,567
1,2,2,1,932
1,2,2,1.5,1343
1,2,2,2,1739
1,2,2,3,1604
1,2,2,4,1460
1,2,2,8,797
1,2,2,12,383
1,2,2,24,0

that is demo 2x2x2 data with zero last point.

If we try to calculate this dataset with manual selection of points (3), we get:

BEAR:
AUC=AUCall=14013.275
AUCinf=AUCinf_obs on AUCall=14013.275

Alternative calculation in another program ("by hand" in excel):
points 3: 9 to 11
Cmax 1739
AUClast 11715,275
AUCall 14013,275
AUCinf_pred on AUClast 14054,16
AUCinf_obs on AUClast 14004,99
AUCinf_obs on AUCall 14013,275

But when we exclude the last point or set it to "NA", the calculations are correct:

exclude point:
"subj","seq","prd","time","conc"
1,2,2,0,0
1,2,2,0.25,36.1
1,2,2,0.5,125
1,2,2,0.75,567
1,2,2,1,932
1,2,2,1.5,1343
1,2,2,2,1739
1,2,2,3,1604
1,2,2,4,1460
1,2,2,8,797
1,2,2,12,383

BEAR
AUC 11715.275
AUCinf 14004.992

Alternative:
AUClast 11715,275
AUCall 11715,275
AUCinf_pred on AUClast 14054,1622
AUCinf_obs on AUClast 14004,9918

Naming the last point "NA":
"subj","seq","prd","time","conc"
1,2,2,0,0
1,2,2,0.25,36.1
1,2,2,0.5,125
1,2,2,0.75,567
1,2,2,1,932
1,2,2,1.5,1343
1,2,2,2,1739
1,2,2,3,1604
1,2,2,4,1460
1,2,2,8,797
1,2,2,12,383
1,2,2,24,NA

BEAR:
AUC=AUClast=AUCall=11715,275
AUCinf=AUCinf_obs on AUClast 14004,9918

Alternative calculation:
AUClast 11715,275
AUCall 11715,275
AUCinf_pred on AUClast 14054,1622
AUCinf_obs on AUClast 14004,9918
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-11 03:28

@ Astea
Posting: # 15721
Views: 10,271
 

 if Clast is zero in bear

Dear Astea,

» The problem is in understanding definitions, there are so many AUC (see this thread)...

Thank you for your pointer. So bear uses Method #4 to calculate AUC0-inf.

» ... I think, BEAR calculates AUC0-inf=AUClast+Cobs/lambdaz, isn't it?

Yes, that's correct.
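As a quick cross-check of that formula with the numbers already posted in this thread (AUClast = 11715.275, Clast = 383 at 12 h, λz ≈ 0.167269591):

```python
auc_last = 11715.275      # AUC(0-t) reported by bear
c_last = 383.0            # last measurable concentration (the 12 h sample)
lz = 0.167269591          # lambda_z from the Excel cross-check
auc_inf = auc_last + c_last / lz   # Method #4: AUClast + Clast/lambda_z
print(round(auc_inf, 3))           # ~14004.992, matching both bear and Excel
```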

» I've made several calculations and came to the conclusion...

In order to verify your results as posted, could you please tell me: (1) which AUC calculation method you used in bear ('all linear' or 'linear-up/log-down')? (2) what λz values you got with bear (with 'NA' input for the last data point) and with Excel? (3) which λz calculation method did you use with non-manual selection of data points? Thanks again.

All the best,
---Yung-jin Lee
Astea
Regular

Russia,
2015-12-11 09:39

@ yjlee168
Posting: # 15723
Views: 10,236
 

 if Clast is zero in bear

Dear yjlee168!
I'm very grateful for your reply!

» In order to verify your results as posted, could you please tell me (1) what AUC calculation method you used with bear ('all linear' or 'linear-up/log-down')? (2) what λz values you got with bear (when input 'NA' for the last data point) and with excel? and (3) what λz calculation did you use with non-manual selection of data points? Thanks again.

1) I used the linear-up/log-down method (1).
2) When 'NA' is entered for the last point, bear gives λz as follows (below I attach all the results for verification):

<< NCA Outputs:- <Subj.# 1 > (Ref.)>>
--------------------------------------------------------
 subj  time   conc  AUC(0-t) AUMC(0-t)
    1  0.00    0.0     0.000     0.000
    1  0.25   36.1     4.513     1.128
    1  0.50  125.0    24.650    10.069
    1  0.75  567.0   111.150    71.037
    1  1.00  932.0   298.525   240.694
    1  1.50 1343.0   867.275   977.319
    1  2.00 1739.0  1637.775  2350.444
    1  3.00 1604.0  3309.275  6495.444
    1  4.00 1460.0  4841.275 11821.444
    1  8.00  797.0  9355.275 36253.444
    1 12.00  383.0 11715.275 58197.444
--------------------------------------------------------

<< Selected data points for lambda_z calculations >>
----------------------------------------------------
 subj seq prd drug time conc
    1   2   2    1    4 1460
    1   2   2    1    8  797
    1   2   2    1   12  383

  << Final PK Parameters >>
-----------------------------
           R sq. = 0.99698
Adj. R sq. (ARS) = 0.99397
        lambda_z = 0.16727
            Cmax = 1739
            Tmax = 2
            Cl/F = 7.1403
            Vd/F = 42.687
         T1/2(z) = 4.1439
        AUC(0-t) = 11715
      AUC(0-inf) = 14005
       AUMC(0-t) = 58197
     AUMC(0-inf) = 99363
        MRT(0-t) = 4.9677
      MRT(0-inf) = 7.0948
-----------------------------


Excel calculations:
lambda_z=0,167269591
AUClast=11715,2750
AUCinfobs=14004,9918


And when we leave "0" and choose manual selection (three points):

<< NCA Outputs:- <Subj.# 1 > (Ref.)>>
--------------------------------------------------------
 subj  time   conc  AUC(0-t) AUMC(0-t)
    1  0.00    0.0     0.000     0.000
    1  0.25   36.1     4.513     1.128
    1  0.50  125.0    24.650    10.069
    1  0.75  567.0   111.150    71.037
    1  1.00  932.0   298.525   240.694
    1  1.50 1343.0   867.275   977.319
    1  2.00 1739.0  1637.775  2350.444
    1  3.00 1604.0  3309.275  6495.444
    1  4.00 1460.0  4841.275 11821.444
    1  8.00  797.0  9355.275 36253.444
    1 12.00  383.0 11715.275 58197.444
    1 24.00    0.0 14013.275 85773.444
--------------------------------------------------------

<< Selected data points for lambda_z calculations >>
----------------------------------------------------
 Subj Time Conc
    1    4 1460
    1    8  797
    1   12  383

  << Final PK Parameters >>
-----------------------------
           R sq. = 0.99698
Adj. R sq. (ARS) = 0.99397
        lambda_z = 0.16708
            Cmax = 1739
            Tmax = 2
            Cl/F = 7.1361
            Vd/F = 42.71
         T1/2(z) = 4.1485
        AUC(0-t) = 14013
      AUC(0-inf) = 14013
       AUMC(0-t) = 85773
     AUMC(0-inf) = 85773
        MRT(0-t) = 6.1209
      MRT(0-inf) = 6.1209
-----------------------------

Excel calculations:
lambda_z=0,167269591
AUClast=11715,2750
AUCinfobs=14004,9918


It seems very strange to me that the λz values in bear are slightly different: 0.16708 and 0.16727. And the question is why the first method (calculating with three manually selected points) fails.
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-11 17:16

@ Astea
Posting: # 15724
Views: 10,128
 

 if Clast is zero in bear

Dear Astea,

» 1). I used linear-up/log-down method (1)
» 2). When 'NA' is marked for the last point BEAR gets λz in the following form (below I attach all the results for the verification):

OK. Thanks.

» ...
» And when we leave "0" and choose manual selection (three points):
» »<< NCA Outputs:- <Subj.# 1 > (Ref.)>>
» --------------------------------------------------------
» subj  time   conc  AUC(0-t) AUMC(0-t)
» ...
»     1  3.00 1604.0  3309.275  6495.444
»     1  4.00 1460.0  4841.275 11821.444
»     1  8.00  797.0  9355.275 36253.444
»     1 12.00  383.0 11715.275 58197.444
»     1 24.00    0.0 14013.275 85773.444 <-

The zero Clast should not be included even with the manual selection method, though it will not cause any error in bear. This is because the extrapolated AUC (i.e., AUCt-inf) becomes zero (since Clast is zero). I don't think this is a correct approach. It looks like I have to take care of this error in bear.

» ...
» It seems to me very strange that λz in BEAR are slightly different: 0.16708 and 0.16727. And the question is why the first method (that is calculating with manually selected 3 points) fails.

Oh? any error message? Thanks again.

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-15 10:21
(edited by yjlee168 on 2015-12-15 19:48)

@ yjlee168
Posting: # 15729
Views: 9,605
 

 bear v2.7.1

Dear all,

I have released bear v2.7.1 on sourceforge. The major change is a fix for the reported error, which occurs whenever a zero conc. is included in the λz (terminal-phase elimination rate constant) calculation, because log(0) equals -Inf, as shown in the error message. Switching to manual selection of data points (and not selecting any zero conc.) may let the run complete without an error; however, it then results in AUC0-t = AUC0-inf, with an extrapolated AUC (AUCt-inf) of zero since Clast = 0. Actually, I don't know whether we should call that AUC0-t or AUC0-inf by definition. Although this has been mentioned on bear's website about data input, I decided to fix the error by replacing any conc. = 0 occurring at time ≠ 0 with 'NA'. This error was also reported elsewhere (1 & 2) at this Forum by other users. Thanks for reading.

» » Error in
» »  lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
» »   NA/NaN/Inf in 'y'

All the best,
---Yung-jin Lee
Astea
Regular

Russia,
2015-12-15 19:32

@ yjlee168
Posting: # 15733
Views: 9,555
 

 bear v2.7.1

Dear yjlee168!

Many thanks for your attention to this problem!

» I decided to fix this error by throwing 'NA' to replace any conc. = 0 occurring at time ≠ 0

What about zero samples that are not in the terminal elimination part of the curve? I mean, theoretically there can be a zero-concentration point between other non-zero values (for example, for drugs with entero-hepatic recirculation). The result of the calculation would then depend on whether this point is marked as zero or "NA", wouldn't it?
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-15 20:17

@ Astea
Posting: # 15734
Views: 9,551
 

 BLLOQ ≠ 0 with measured drug plasma conc.

Dear Astea,

» What about zero-samples that are not in the last elimination part of the curve? I mean that theoretically there can be a zero-concentration point between other non-zero values (for example for the drugs with entero-hepatic recirculation)? The result of the calculation would depend on whether this point is marked like "zero" or "NA", wouldn't it?

Good question. Well, theoretically there should be no real zero conc. at all after taking a medicine, for a reasonable sampling time period of a BE/BA study. Of course, there may be some conc. below the LLOQ, occurring at any time point. I don't quite understand why we would have a real zero conc. for a drug undergoing entero-hepatic recirculation. :confused: Are they just below the LLOQ? Could you explain more? Thanks.

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2015-12-16 01:45

@ yjlee168
Posting: # 15735
Views: 9,492
 

 BLLOQ ≠ 0 with measured drug plasma conc.

Dear Yung-jin, Astea, and all others,

» Well, theoretically there should be no real zero conc. at all after taking a medicine, for a reasonable sampling time period of a BE/BA study. Of course, there may be some conc. below the LLOQ, occurring at any time point.

Strong agree.

» I don't quite understand why we will possibly have real zero conc. for the drug undergoing entero-hepatic recirculation.:confused: Are they just below LLOQ?

Yep, very likely they are. See these threads: #14584, #14925, #10603, #4182. I like Simon’s term “embedded BQLs”. I think it is very difficult for a silicon-based life form to guess which PK may underlie such profiles. The easiest way to avoid them is a more sensitive bioanalytical method… I know, I know. Shit happens. Whatever one chooses to do, it should be done consistently across all subjects and treatments. Sorry for no better news. Don’t blame the messenger.

All the best,
Helmut Schütz
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-16 11:38

@ Helmut
Posting: # 15736
Views: 9,469
 

 BLLOQ ≠ 0 with measured drug plasma conc.

Dear Helmut,

Thank you for the pointers to previous discussions. From current practice in BE/BA studies I still don't know, when we get a BQL conc. in the analytical report, how to judge whether it is a real zero (as illustrated by weidson's example/plot, subject# 16) or a BQL. My question is: has any analytical lab ever reported a real zero measured conc. in an analytical report? :confused:

» » ... why we will possibly have real zero conc. for the drug undergoing entero-hepatic recirculation.:confused: Are they just below LLOQ?
»
» Yep, very likely they are. See these threads: #14584, #14925, #10603, #4182. I like Simon’s term “embedded BQLs”...

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2015-12-16 12:40

@ yjlee168
Posting: # 15737
Views: 9,491
 

 BLLOQ ≠ 0 ∀ t > 0

Hi Yung-jin,

» […] From the current practice of a BE/BA study, I still don't know, when we get a BQL for a conc. from analytical report,

Easy hopefully: If it is <LLOQ. ;-)
Current practice is to give a non-numeric value. Sometimes I see zeros in reports, but that’s stupid for any t > 0 (for theoretical reasons, as we discussed above). Furthermore:
  • In calibration (regardless of the model) we have an intercept. How likely is it that we “measure” something that after back-calculation gives exactly zero? Practically zero…
  • According to BMV guidelines the LLOQ is defined as the lowest concentration which can be measured with acceptable (in)accuracy and (im)precision. Would that be possible for zero? No way. Even with a “wonder method”, how would one calculate the CV% = 100 × SD / 0 = ?
  • At the first Crystal City conference (1990) there were heated debates about this issue. The consensus was not (!) to report a result as zero. It hasn’t been questioned ever since. Extrapolation <LLOQ and >ULOQ is not acceptable. Well, in the FDA’s draft guidance on BMV (September 2013) we read an amazing sentence: “Concentrations below the LLOQ should be reported as zeros.” Browsing the comments, I’m pretty sure that this oddity will be trashed in the final version.

» how to judge it's real zero (as illustrated by weidson's example/plot. subject# 16) or a BQL.

IMHO, the chance that it is “real zero” is ≈0.

» My question is: has any analytical lab. had a chance to report real zero measured conc. in the analytical report? :confused:

They have. If the analysts are no-brainers and follow stupid SOPs, yes.

All the best,
Helmut Schütz
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2015-12-16 18:22

@ Helmut
Posting: # 15738
Views: 9,433
 

 BLLOQ ∋ { 0, ≈0, BLLOQ{...} ∀ t > 0}

Dear Helmut,

Thank you so much for your great information and explanations.:ok: I will try to keep all these in my mind.;-)

All the best,
---Yung-jin Lee
Astea
Regular

Russia,
2016-01-30 17:11

@ yjlee168
Posting: # 15901
Views: 7,422
 

 BLLOQ at first two time ponts, AUC calculation

Dear yjlee168!

Please check the calculation of AUClast in the situation where there are two zero (BLLOQ) points in the concentration data. It seems to me that an error has appeared in the latest version of bear. Maybe it is closely connected with the corrections you made to handle BLLOQ data in the elimination part of the curve.

Let's start with the Single2x2x2 demo: if we change the second concentration to zero, AUC0-t becomes 14451.88, which is greater than the real value (14436.25) by 0.25 × 1/2 × 125.

Data set: "subj","seq","prd","time","conc"
1,2,2,0,0
1,2,2,0.25,0
1,2,2,0.5,125
1,2,2,0.75,567
1,2,2,1,932
and so on

AUC calculation method: 1 (all linear)

Please correct me if I'm wrong...
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-01-30 19:00

@ Astea
Posting: # 15902
Views: 7,434
 

 zero-zero with AUC calculation in bear

Dear Astea,

I see your point. You are absolutely right. Apparently bear calculates the first AUC segment from time zero rather than from the second data point at time 0.25 when that point has a zero conc., since it has been removed. I should probably remove the data point at time zero in this situation and keep the one at time 0.25 instead. Interesting issue. I will think about how to fix this later. Thanks a lot.

» [...]
» Let's start with the Single2x2x2_demo: if we will change the second concentration to zero the AUC_0t would be 14451,88, that is greater on 0.25*1/2*125 from the real value: 14436,25.
» [...]
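Astea's reported difference of 0.25 × 1/2 × 125 can be verified directly with the linear trapezoidal rule; a quick Python check (illustration only):

```python
# Linear trapezoid: area of one segment between (t1, c1) and (t2, c2).
def trap(t1, c1, t2, c2):
    return (t2 - t1) * (c1 + c2) / 2

# Keeping the zero at t = 0.25: the rising edge runs from (0.25, 0) to (0.5, 125).
kept = trap(0, 0, 0.25, 0) + trap(0.25, 0, 0.5, 125)   # 15.625

# Dropping that point: the segment runs straight from (0, 0) to (0.5, 125).
dropped = trap(0, 0, 0.5, 125)                         # 31.25

print(dropped - kept)  # 15.625 = 0.25 * 1/2 * 125, exactly the reported excess
```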

All the best,
---Yung-jin Lee
mittyri
Senior

Russia,
2016-02-01 22:16

@ yjlee168
Posting: # 15911
Views: 7,196
 

 BQL rules in bear

Dear Yung-jin,

What about more tricky datasets?
Data set: "subj","seq","prd","time","conc"
1,2,2,0,0
1,2,2,0.25,0
1,2,2,0.5,0
1,2,2,0.75,1
1,2,2,1,0
1,2,2,2,5
1,2,2,3,100
1,2,2,4,200

I think you are close to needing dataset pre-processing (as with the BQL rules in PHX; I can send you a PDF guide with some example rules if needed), so that users can define their own data-handling rules (missing or BLOQ, depending on the position of the data point).
Good luck!

Kind regards,
Mittyri
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-02 10:59

@ mittyri
Posting: # 15915
Views: 7,156
 

 BQL rules in bear

Dear mittyri,

Yes, I have already taken your example dataset into consideration since Astea's last message. That's why I am still thinking about how to fix it once and for all. The situation is possible with delayed oral absorption (when there is a Tlag), though I still wonder how to tell whether it is due to delayed absorption or a true BLOQ (as we have discussed a lot at this Forum). Anyway, the time of the first data point with conc. = 0 is critical for the AUC calculation.

» What about more tricky datasets?
» Data set: "subj","seq","prd","time","conc"
» 1,2,2,0,0
» 1,2,2,0.25,0
» 1,2,2,0.5,0
» 1,2,2,0.75,1
» [...]
» I think you are close enough to pre-processing of datasets (as in PHX - BQL rules, I can send you PDF guide if needed, there are some examples of rules), so the users can define their own rules of data handling (missing, BLOQ depending on position of data).

Thank you so much for your kindness. But that won't be necessary.

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-02 21:30
(edited by yjlee168 on 2016-02-03 08:46)

@ mittyri
Posting: # 15921
Views: 7,096
 

 possible solutions?

Dear Astea and mittyri,

I think I have found the solution for your datasets. This time I keep all of the user's input data unchanged (previously, an 'NA' was assigned to any conc. = 0 except the first data point at time = 0). Calculations of AUC and AUMC are fine with the all-linear trapezoidal method. If the linear-up/log-down trapezoidal method is applied, the following situations switch to the linear trapezoid for the segment between the data points (ti-1, Ci-1) and (ti, Ci): (1) Ci = Ci-1; (2) Ci > Ci-1; and (3) Ci = 0 or Ci-1 = 0 (new).
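The switching rule can be sketched per segment as follows; a Python illustration (bear itself is R, and the function name is invented):

```python
import math

def segment_auc(t1, c1, t2, c2):
    """One segment of the linear-up/log-down method as described above:
    log trapezoid only for a strict decline between two positive
    concentrations; rising, flat, or zero-endpoint segments fall back
    to the linear trapezoid."""
    if c1 > 0 and c2 > 0 and c2 < c1:
        return (t2 - t1) * (c1 - c2) / math.log(c1 / c2)  # log-down
    return (t2 - t1) * (c1 + c2) / 2                      # all-linear fallback

segment_auc(0.25, 0, 0.5, 125)  # zero endpoint -> linear: 15.625
segment_auc(2, 1739, 3, 1604)   # declining -> log trapezoid: ~1670.59
```

The second value matches the 2 h to 3 h increment in bear's linear-up/log-down output for Astea's dataset.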

So for Astea's dataset (all linear), I got:
 subj  time conc  AUC(0-t) AUMC(0-t)
    1  0.00    0     0.000     0.000
    1  0.25    0     0.000     0.000
    1  0.50  125    15.625     7.812
    1  0.75  567   102.125    68.781
    1  1.00  932   289.500   238.438
    1  1.50 1343   858.250   975.062
    1  2.00 1739  1628.750  2348.188
    1  3.00 1604  3300.250  6493.188
    1  4.00 1460  4832.250 11819.188
    1  8.00  797  9346.250 36251.188
    1 12.00  383 11706.250 58195.188
    1 24.00   72 14436.250 96139.188


And with the same dataset of Astea's (linear-up/log-down):
 subj  time conc  AUC(0-t) AUMC(0-t)
    1  0.00    0     0.000     0.000
    1  0.25    0     0.000     0.000
    1  0.50  125    15.625     7.812
    1  0.75  567   102.125    68.781
    1  1.00  932   289.500   238.438
    1  1.50 1343   858.250   975.062
    1  2.00 1739  1628.750  2348.188
    1  3.00 1604  3299.341  6513.416
    1  4.00 1460  4830.212 11859.468
    1  8.00  797  9211.243 37267.003
    1 12.00  383 11471.007 59317.527
    1 24.00   72 13703.908 95940.683


And for mittyri's dataset (all linear):
 subj  time conc  AUC(0-t) AUMC(0-t)
    1  0.00    0     0.000     0.000
    1  0.25    0     0.000     0.000
    1  0.50  125    15.625     7.812
    1  0.75    0    31.250    15.625
    1  1.00  932   147.750   132.125
    1  1.50 1343   716.500   868.750
    1  2.00 1739  1487.000  2241.875
    1  3.00 1604  3158.500  6386.875
    1  4.00 1460  4690.500 11712.875
    1  8.00  797  9204.500 36144.875
    1 12.00  383 11564.500 58088.875
    1 24.00   72 14294.500 96032.875


The only thing that remains unsolved is how to draw a semi-log plot with these zero conc. values. If they are included, they are skipped in the plot, and we then see a broken line, since log(0) cannot be plotted. Any suggestion for this? Thanks in advance.

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-03 09:04
(edited by yjlee168 on 2016-02-03 09:49)

@ mittyri
Posting: # 15922
Views: 7,048
 

 zero (0), missing (NA), or BLLOQ (0 or NA) in bear

Dear mittyri and others,

I guess the issue (zero, BLLOQ, or missing) may not be solved quickly. Thus I leave the decision to users as to whether a value is a real zero conc. (0), a missing value (NA), or a BLLOQ (0 or NA). Basically, a drug plasma conc. should be input in bear as either a zero or an NA. If it is an NA, it will be ignored (deleted). If it is a zero conc., it will be treated as a number in the AUC/AUMC calculation. However, since a zero conc. cannot be used to estimate λz, any zero conc. will not be counted as a valid data point there, as in the following example:

-> the subj# 1 (Ref.) in sequence# 2, period# 2,
    λz (kel) cannot be calculated with ARS or AIC.
 subj seq prd drug  time conc
    1   2   2    1  0.00    0
    1   2   2    1  0.25    0
    1   2   2    1  0.50  125
    1   2   2    1  0.75    0
    1   2   2    1  1.00  932
    1   2   2    1  1.50 1343
    1   2   2    1  2.00 1739
    1   2   2    1  3.00 1604
    1   2   2    1  4.00    0
    1   2   2    1  8.00  797
    1   2   2    1 12.00    0
    1   2   2    1 24.00    0
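For reference, the acceptance criteria described earlier in this thread (ARS/AIC: more than 2 valid data points after Tmax; TTT-based methods: more than 2 valid points after 2×Tmax, with zeros and NA not counting) can be sketched as follows. This is my own illustration of the rule, not bear's actual code:

```python
def lambda_z_ok(times, concs, method="ARS"):
    """Check whether enough valid (non-zero, non-missing) data points
    remain for lambda_z estimation.  Per the criteria discussed in
    this thread: ARS/AIC need > 2 points after Tmax; TTT-style
    methods need > 2 points after 2*Tmax."""
    pairs = [(t, c) for t, c in zip(times, concs) if c is not None]
    tmax = max(pairs, key=lambda p: p[1])[0]            # time of Cmax
    cutoff = tmax if method in ("ARS", "AIC") else 2 * tmax
    valid = [c for t, c in pairs if t > cutoff and c > 0]
    return len(valid) > 2

# Subject 1 (Ref.) above: Cmax = 1739 at t = 2 h.  Only two valid
# points (1604 at 3 h and 797 at 8 h) remain after Tmax, so ARS/AIC
# must reject this profile, exactly as bear reports.
times = [0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0, 8.0, 12.0, 24.0]
concs = [0, 0, 125, 0, 932, 1343, 1739, 1604, 0, 797, 0, 0]
print(lambda_z_ok(times, concs, "ARS"))   # -> False
```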

» I think you are close enough to pre-processing of datasets (as in PHX - BQL rules, I can send you PDF guide if needed, there are some examples of rules), so the users can define their own rules of data handling (missing, BLOQ depending on position of data).

All the best,
---Yung-jin Lee
mahmoud-teaima
Regular

Cairo, Egypt,
2016-02-08 21:28
(edited by Ohlbe on 2016-02-09 00:48)

@ yjlee168
Posting: # 15959
Views: 6,760
 

 zero (0), missing (NA), or BLLOQ (0 or NA) in bear

Dear yjlee168,
I got almost the same error message with bear 2.7.5 on R 3.2.3 for Mac.

--- the list of problematic subject data ---
--------------------------------------------
-> the subj# 18 (Test) in sequence# 1, period# 2,
λz (kel) cannot be calculated with ARS or AIC..
  subj seq prd drug  time   conc
   18   1   2    2   0.00   0.00
   18   1   2    2   0.25   4.17
   18   1   2    2   0.50  12.99
   18   1   2    2   1.00  18.23
   18   1   2    2   1.50  25.69
   18   1   2    2   2.00  42.52
   18   1   2    2   2.50  57.78
   18   1   2    2   3.00  66.04
   18   1   2    2   3.50  77.02
   18   1   2    2   4.00  27.46
   18   1   2    2   5.00  15.65
   18   1   2    2   6.00   9.24
   18   1   2    2   8.00   7.43
   18   1   2    2  10.00   0.92
   18   1   2    2  24.00   <NA>


-> the subj# 19 (Test) in sequence# 1, period# 2,
λz (kel) cannot be calculated with ARS or AIC..
  subj seq prd drug  time     conc
   19   1   2    2   0.00     0.00
   19   1   2    2   0.25    38.40
   19   1   2    2   0.50    47.86
   19   1   2    2   1.00   105.70
   19   1   2    2   1.50   117.36
   19   1   2    2   2.00   124.71
   19   1   2    2   2.50    66.82
   19   1   2    2   3.00    55.15
   19   1   2    2   3.50    39.34
   19   1   2    2   4.00    27.11
   19   1   2    2   5.00    18.35
   19   1   2    2   6.00    10.50
   19   1   2    2   8.00     7.37
   19   1   2    2  10.00     3.43
   19   1   2    2  24.00     0.250763777


Is there any solution for it yet?

Is there any option to work with these cases (the 2 subjects with error messages) and calculate the parameters by manual selection of 2-6 points after Cmax?

Greetings.


Edit: formatted using BBCodes. Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Ohlbe]

Mahmoud Teaima, PhD.
Assistant Professor at Department of Pharmaceutics and Industrial Pharmacy.
Vice-director at Center of Applied Research and Advanced Studies (CARAS).
Faculty of Pharmacy, Cairo University, Cairo, Egypt.
Ohlbe
Hero

France,
2016-02-09 00:51

@ mahmoud-teaima
Posting: # 15960
Views: 6,731
 

 Please avoid Full Quotes (“TOFU”)!

Dear mahmoud-teaima,

When replying to a message, please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post. We had to edit most of your posts today.

Regards
Ohlbe
mahmoud-teaima
Regular

Cairo, Egypt,
2016-02-09 04:55

@ Ohlbe
Posting: # 15961
Views: 6,713
 

 error message in bear

Thanks for editing my posts.
I hope someone has a solution for the problem and error in my previous post.
Many thanks in advance.

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-09 06:40
(edited by yjlee168 on 2016-02-09 10:38)

@ mahmoud-teaima
Posting: # 15962
Views: 6,693
 

 please check your dataset file

Dear mahmoud-teaima,

Could you please check your dataset file (*.csv) carefully once more, especially for these two subjects? Look for extra spaces, misplaced numbers, etc. BTW, which CSV format do you use? It seems unlikely to get such errors otherwise. Yes, you can try manual selection of data points: go to *Edit setup files to change the selection method for λz estimation.
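To catch such stray characters before bear ever sees them, one can scan the concentration column and report every cell that is neither numeric nor an explicit NA. A generic sketch (the column layout subj,seq,prd,time,conc is an assumption here; adjust `conc_col` to the actual file):

```python
import csv
import io

def find_bad_conc(csv_text, conc_col=4, na_tokens=("NA", "")):
    """Return (row_number, cell) for every concentration cell that is
    neither a number nor an accepted NA token."""
    bad = []
    for i, row in enumerate(csv.reader(io.StringIO(csv_text)), start=1):
        cell = row[conc_col].strip()
        if cell in na_tokens:
            continue
        try:
            float(cell)
        except ValueError:
            bad.append((i, cell))
    return bad

# '_' and '<' are the kind of entries later found in the
# problematic dataset further down this thread.
sample = "5,1,2,0.5,_\n5,1,2,1.0,12.3\n23,2,2,0.25,<\n23,2,2,0.5,NA\n"
print(find_bad_conc(sample))   # -> [(1, '_'), (3, '<')]
```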

All the best,
---Yung-jin Lee
mahmoud-teaima
Regular

Cairo, Egypt,
2016-02-09 15:00

@ yjlee168
Posting: # 15963
Views: 6,683
 

 please check your dataset file

Dear YJLee,
Thanks for your instructions.
I created the .csv file of my data with Office 2011 for Mac, and I did it more than once.
When I used the ARS method for calculation of kel, the same error for subjects 18 and 19 popped up, and when I tried to switch to manual selection for calculation of kel, I got the following error message:

" Point Selection for Lambda_z Estimation
-------------------------------------------------------------------------------
This method is about to select data points for lambda_z estimation based on
the manual selection from each subject's log(conc.) vs. time plot.

1. MS-Windows OS:
Please use your mouse to click the data point on the graph to select
the data points for linear regression. Once you want to stop selection
for each plot, click the 'Stop' menu on the left corner of th plot
and click 'Stop locator.' Alternately, you can use click mouse right-
button to bring up hidden menu and click 'Stop' to stop selection.

2. Linux/Unix (Ubuntu) PC OS:
Use your mouse left (1st)-button to select and mouse right (2nd)-button
to stop selection.

3. Mac OS X:
Use your mouse button to select and hold down the Control key and click
(Control-click) to stop selection.

When finishing the current subject and the next subject's plot will show up.
Do the same procedures until finishing all subjects.

Note: The # of data points should be at least 2's and max. is 6.
Please do NOT choose any data point before Tmax.
...............................................................................


Press Enter to proceed...



****************************************************************************
Data for the Ref. Products:
----------------------------------------------------------------------------
AUC(0-t), AUC(0-inf), AUMC(0-t), AUMC(0-inf), λz, Cl/F, Vd, MRT,
and half-life (T1/2(z))

****************************************************************************


Error in if ("y" %in% log && any(ii <- y <= 0 & !is.na(y))) { :
missing value where TRUE/FALSE needed


Please advise me.

Greetings

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-09 15:38

@ mahmoud-teaima
Posting: # 15964
Views: 6,635
 

 please check your dataset file

Dear mahmoud-teaima,

Interesting. Could you e-mail me (yjlee168 at gmail.com) your dataset in .csv format? I will take a look at it. Sorry about this.

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2016-02-09 19:14

@ mahmoud-teaima
Posting: # 15965
Views: 6,600
 

 A possible R bug?

Dear mahmoud-teaima & useR,

After checking the dataset, I found two errors:
...
5,1,2,0.5,_
...
23,2,2,0.25,<
...

However, bear/R still ran without any warning message. Then some weird things happened: R totally misinterpreted the input data for all subjects. For example,
subj seq prd drug  time  conc
   18   1   2    2  0.00  0.00
   18   1   2    2  0.25  4.17
   18   1   2    2  0.50 12.99
   18   1   2    2  1.00 18.23
   18   1   2    2  1.50 25.69
   18   1   2    2  2.00 42.52
   18   1   2    2  2.50 57.78
   18   1   2    2  3.00 66.04
   18   1   2    2  3.50 77.02 <- correct one!
   18   1   2    2  4.00 27.46
   18   1   2    2  5.00 15.65
   18   1   2    2  6.00  9.24
   18   1   2    2  8.00  7.43
   18   1   2    2 10.00  0.92
   18   1   2    2 24.00  <NA>

 Tmax, Cmax = 6 614 <- Cmax = 614 at 6 hr? how come? :confused:
 
 subj seq prd drug  time        conc
   19   1   2    2  0.00        0.00
   19   1   2    2  0.25       38.40
   19   1   2    2  0.50       47.86
   19   1   2    2  1.00      105.70
   19   1   2    2  1.50      117.36
   19   1   2    2  2.00      124.71 <- correct one!
   19   1   2    2  2.50       66.82
   19   1   2    2  3.00       55.15
   19   1   2    2  3.50       39.34
   19   1   2    2  4.00       27.11
   19   1   2    2  5.00       18.35
   19   1   2    2  6.00       10.50
   19   1   2    2  8.00        7.37
   19   1   2    2 10.00        3.43
   19   1   2    2 24.00 0.250763777

 Tmax, Cmax = 8 560  <- Cmax = 560 at 8 hr? :confused:

If I replaced the underscore with 'NA' for subject #5, everything went back to normal.

subj seq prd drug  time  conc
   18   1   2    2  0.00  0.00
   18   1   2    2  0.25  4.17
   18   1   2    2  0.50 12.99
   18   1   2    2  1.00 18.23
   18   1   2    2  1.50 25.69
   18   1   2    2  2.00 42.52
   18   1   2    2  2.50 57.78
   18   1   2    2  3.00 66.04
   18   1   2    2  3.50 77.02
   18   1   2    2  4.00 27.46
   18   1   2    2  5.00 15.65
   18   1   2    2  6.00  9.24
   18   1   2    2  8.00  7.43
   18   1   2    2 10.00  0.92
   18   1   2    2 24.00    NA

 Tmax, Cmax = 3.5 77.02

 subj seq prd drug  time        conc
   19   1   2    2  0.00   0.0000000
   19   1   2    2  0.25  38.4000000
   19   1   2    2  0.50  47.8600000
   19   1   2    2  1.00 105.7000000
   19   1   2    2  1.50 117.3600000
   19   1   2    2  2.00 124.7100000
   19   1   2    2  2.50  66.8200000
   19   1   2    2  3.00  55.1500000
   19   1   2    2  3.50  39.3400000
   19   1   2    2  4.00  27.1100000
   19   1   2    2  5.00  18.3500000
   19   1   2    2  6.00  10.5000000
   19   1   2    2  8.00   7.3700000
   19   1   2    2 10.00   3.4300000
   19   1   2    2 24.00   0.2507638

 Tmax, Cmax = 2 124.71

Very interesting dataset. :ponder:
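For what it is worth, the bogus Tmax/Cmax values look consistent with R's classic coercion trap: a single non-numeric cell ('_' or '<') makes read.csv turn the whole conc column into a factor, and a later as.numeric() on that factor returns the 1-based level codes rather than the original concentrations. A small Python sketch of that mechanism (my illustration of the general idea, not bear's code):

```python
def factor_codes(column):
    """Mimic R's string-to-factor coercion followed by as.numeric():
    every distinct string becomes a level, and each value is replaced
    by its 1-based index in the sorted level set -- not by the number
    it spells."""
    levels = sorted(set(column))
    return [levels.index(v) + 1 for v in column]

# One stray '_' forces the whole column to strings, so every
# concentration is silently replaced by a small level code;
# 77.02 becomes 4, which is how a nonsense Cmax can appear.
conc = ["0.00", "4.17", "12.99", "77.02", "_"]
print(factor_codes(conc))   # -> [1, 3, 2, 4, 5]
```

In R itself the usual defences are read.csv(..., na.strings = c("NA", "_", "<")) or an explicit colClasses specification, and the documented as.numeric(levels(f))[f] idiom when a factor must be converted back to numbers.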

All the best,
---Yung-jin Lee
The BIOEQUIVALENCE / BIOAVAILABILITY FORUM is hosted by
BEBAC Ing. Helmut Schütz