kvoronov
☆

Russia,
2019-11-20 10:55
(915 d 08:40 ago)

Posting: # 20834
Views: 5,719

## Previously data point selection for lambda z estimate issue in bear [🇷 for BE/BA]

Hello BEBAC team,

when I tried to use a dataset with previously selected points,
it seems that bear tries to find the columns:
"subj" "time" "conc" "conc_data" "drug"

but ***_lambda_z_select_all.RData contains:

"subj" "seq"  "prd"  "drug" "time" "conc"

and execution stops with
Error in int_abline(a = a, b = b, h = h, v = v, untf = untf, ...)

I have also tried renaming the columns, but then I get a different result with the same initial data.

Kirill
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-20 14:07
(915 d 05:27 ago)

@ kvoronov
Posting: # 20839
Views: 4,828

## Previously data point selection for lambda z estimate issue in bear

Hi Kirill,

Thanks for posting the issue here. The term 'previous data point selection for lambda z' is admittedly confusing. It refers only to a previously saved manual data-point selection, not to ARS, AIC, TTT or any other method. The default file name for a previously saved manual data-point selection for lambda z is '***_lambda_z_manual_select_all.RData'.

» when I tried to use dataset with previously selected points,
» it seems that bear is trying to find columns:
» "subj" "time" "conc" "conc_data" "drug"

That's correct. That is exactly the data format for a previously saved manual data-point selection for lambda z. You have to select 'manual selection' under 'data point selection for lambda z estimate'; bear will then write this file (***_lambda_z_manual_select_all.RData) to your working directory.

» ...
» but ***_lambda_z_select_all.RData contains:
» "subj" "seq"  "prd"  "drug" "time" "conc"

This is correct too. However, that file is just part of bear's NCA output; it is definitely not the previously saved manual data-point selection for lambda z.
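For anyone hitting the same error, a quick way to check which columns a saved selection file actually contains before feeding it to bear. The expected column names are taken from this thread; the demo file is created here only so the snippet is self-contained (substitute your own path):

```r
## Inspect a saved selection file before re-using it in bear. The
## expected column names are the ones quoted in this thread.
expected <- c("subj", "time", "conc", "conc_data", "drug")

demo <- file.path(tempdir(), "demo_lambda_z_manual_select_all.RData")
manual_select <- data.frame(subj = 1L, time = 0.5, conc = 1.2,
                            conc_data = 1.2, drug = 1L)
save(manual_select, file = demo)

loaded <- load(demo)                 # returns the name(s) of restored objects
obj    <- get(loaded[1])
missing_cols <- setdiff(expected, names(obj))
if (length(missing_cols))
  stop("missing columns: ", paste(missing_cols, collapse = ", "))
```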

Hope this can help.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
kvoronov
☆

Russia,
2019-11-20 15:40
(915 d 03:55 ago)

@ yjlee168
Posting: # 20842
Views: 4,803

## Previously data point selection for lambda z estimate issue in bear

Thank you yjlee168!

Is it possible to combine the advantages of the ARS method with manual selection for certain subjects?

This matters because manual selection only allows choosing 2–6 data points for the estimation of λz.
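For context, λz is conventionally estimated by log-linear regression on the chosen terminal data points; a minimal sketch in plain R (illustrative only, not bear's internal code, with simulated data):

```r
## Minimal log-linear lambda_z estimate on chosen terminal points.
time <- c(8, 10, 12, 16, 24)        # terminal sampling times (h)
conc <- 10 * exp(-0.15 * time)      # simulated, noiseless concentrations
fit  <- lm(log(conc) ~ time)        # log-linear terminal fit
lambda_z <- -coef(fit)[["time"]]    # elimination rate constant (1/h)
t_half   <- log(2) / lambda_z       # terminal half-life (h)
```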

Any help will be appreciated!
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-20 17:12
(915 d 02:23 ago)

@ kvoronov
Posting: # 20843
Views: 4,816

## combine ARS & manual selection for lambda z in bear

Hi Kirill,

» ...
» Is it possible to take both advantage of ARS method and manual selection for certain subjects?

Technically it is of course possible; from a regulatory perspective, however, I really don't know whether that is acceptable.

Here is how to do it technically:
1. first remove all subjects with insufficient data points for ARS from your original dataset (say, 'orig.csv') and save the reduced data as 'reduced.csv';
2. run bear with the dataset 'reduced.csv' and select ARS for the lambda z estimate; if it runs OK, this step will generate '***_lambda_z_select_all.csv' (the file you mentioned previously);
3. finally, run bear with your original (full) dataset ('orig.csv') and this time choose 'manual selection' for the lambda z estimate; you can now manually select the data points indicated in '***_lambda_z_select_all.csv' (ARS method) for all subjects included in 'reduced.csv', and manually select the data points for the subjects that were excluded from 'reduced.csv' for lacking sufficient points. It sounds complicated, but it should work.
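Step 1 above can be scripted. A hypothetical sketch, assuming the dataset uses bear's 'subj'/'time'/'conc' columns and taking a minimum of three data points as the eligibility threshold (the threshold is an assumption, not bear's actual rule; the inline data frame stands in for `read.csv("orig.csv")`):

```r
## Hypothetical sketch of step 1: drop subjects with too few points
## for ARS and save the rest as 'reduced.csv'.
orig <- data.frame(subj = rep(1:2, c(6, 2)),   # stand-in for read.csv("orig.csv")
                   time = c(0:5, 0:1),
                   conc = c(0, 5, 4, 3, 2, 1, 0, 5))
min_points <- 3                                # assumed eligibility threshold
pts  <- table(orig$subj)                       # data points per subject
keep <- names(pts)[pts >= min_points]
reduced <- orig[orig$subj %in% as.integer(keep), ]
write.csv(reduced, file.path(tempdir(), "reduced.csv"), row.names = FALSE)
```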

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Helmut
★★★

Vienna, Austria,
2019-11-20 17:32
(915 d 02:03 ago)

@ yjlee168
Posting: # 20844
Views: 4,771

Hi Yung-jin,

» Technically, it is of course possible; however, in regulatory aspect, I really don't know if it is OK to do so.

Why not? I have been doing that for decades and never got a question from any agency.
• Any [sic] automatic algo performs badly for “flat” profiles (e.g., modified release) and terribly for multiphasic products.
• Visual inspection / adjustment is recommended in the literature.
• It is stated not only in my SOP but also in the protocol: automatic method first (I’m lazy and in most cases it works), adjustment second.
Of note, I get a lot of reports on my desk. Many were accepted by authorities despite crazy λz estimations (see here and esp. there).

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-20 18:48
(915 d 00:47 ago)

@ Helmut
Posting: # 20845
Views: 4,891

Dear Helmut,

» Why not? I have been doing that for decades and never got a question from any agency.

» — Any [sic] automatic algo performs badly for “flat” profiles (e.g., modified release) and terribly for multiphasic products.

Right. That can indeed happen with some modified-release products. It makes sense.

» — Visual inspection / adjustment is recommended in the literature.
» — It is stated not only in my SOP but also in the protocol: automatic method first (I’m lazy and in most cases it works), adjustment second.

Smart strategy. Thanks for your messages; I learned something.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Helmut
★★★

Vienna, Austria,
2019-11-20 18:59
(915 d 00:35 ago)

@ yjlee168
Posting: # 20846
Views: 4,781

Hi Yung-jin,

» » I do that for decades and never got a question from any agency.
»

My first NCA code was for a TI-59 in 1981. It even had a thermo-printer!
Serious coding started on an HP 9826 in 1982 and continued on an HP 9836. No hard drive… IIRC, Unix came on 6 floppies and Pascal / Rocky Mountain BASIC on another 5 each.

When we switched to a Series 715 in 1993 (also serving our LAB/UX LIMS) …

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-20 21:05
(914 d 22:30 ago)

@ Helmut
Posting: # 20847
Views: 4,741

Dear Helmut,

It's really amazing and unbelievable. I think your first NCA code running on the TI-59 was probably the first NCA computer program in the world. I read your messages many times and also browsed all the links you provided. It really shocks me. It must have been very tough to do programming on a TI-59 back in 1981. Frankly speaking, I did not even know what a 'computer' was in 1981, let alone computer programming. The only 'computer' I had in 1981 was a Casio palm calculator. My first computer was an Apple ][ around 1982–83. I used it mostly to play games.

Thanks for sharing your experiences in computer programming.

» My first NCA code was for a TI-59 in 1981...

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Helmut
★★★

Vienna, Austria,
2019-11-21 00:53
(914 d 18:41 ago)

@ yjlee168
Posting: # 20848
Views: 4,800

## OT for nerds

Hi Yung-jin,

» It's really amazing and unbelievable. I think your first NCA code running on the TI-59 was probably the first NCA computer program in the world.

I don’t think so. I’m pretty sure that Carl Metzler’s NONLIN for a Sperry Univac mainframe could perform NCA left-handed. I wrote him a letter (yes, paper!) in 1982 and asked for the code. A couple of weeks later I received a 10.5" diameter reel of ½" 9-track magnetic tape. When I went to my IT department, they told me “Hey, we run a business here. We have COBOL! A FORTRAN compiler will cost 25,000 bucks. Go and find another playground.”

» It must have been very tough to do programming on a TI-59 back in 1981.

It was. I learned the hard way to keep the code short. You had to find a compromise: the longer the code, the fewer storage registers you had, and vice versa. But it was a pain in the back. The procedure was like this: enter the t/C pairs for a subject; the NCA is performed and stored. When one formulation was finished, start over with the next one. By then the memory was almost full. Read another memory card, overwriting the NCA code with the one for stats. Press a key and get the PE and CI. Studies were randomized, but everybody used just a t-test. ANOVA on a TI-59? I didn't even dream about it.

» Frankly speaking, I did not even know what a 'computer' was in 1981, let alone computer programming.

I started with a Philips P354 in 1976: punch-card reader and the dreary green-bar paper. It had been used by the Austrian national bank for some years, and when we received it as a gift it showed signs of heavy use… The one in our IT-lab. →
Note the pile of paper in the left corner. When you made a single mistake in punching the cards, the output went straight there. My first code was the bisection algorithm for finding roots of arbitrary functions. It took me two weeks…
On the other hand, we were lucky to have this 180+ kg monster when “time-sharing” was the norm: you sent your code to the university’s computing center only to get a hand-written note two weeks later saying “Did not compile, execution stopped with buffer overflow. Revise.”

Heinrich Koch of the University of Vienna proudly showed me his HP 85 and the code he had developed with Wolfgang Ritschel at the University of Cincinnati. It crashed during the demo; I was not convinced. There were rumors that together with a Wang 2200 you would get PK software “for free”. Wang had no representative in Austria, and I didn’t want to rely on the promises of the one in Germany. My first steps into the world of desktop computing came with an HP 9845, learning Pascal.

» The only 'computer' I had in 1981 was a Casio palm calculator. My first computer was an Apple ][ around 1982-83. I used it to play games mostly.

Saw my first one in New York in 1983. Didn’t know what to do with the one-eyed mouse.

» Thanks for sharing your experiences in computer programing.

Welcome. Now you know why I was so happy to start with R (1.4.0 in 2001). A simple console; a weighted polynomial regression is just one line of code. Wonderful. I still write the forum’s php scripts in a simple editor. I have fancy IDEs (Eclipse, Aptana) but never use them. Okay, boomer
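The “one line” of weighted regression alluded to above might look like this in R; the data and the 1/x weighting scheme are made up purely for illustration:

```r
## Weighted quadratic regression in a single lm() call.
x <- 1:10
y <- 2 + 0.5 * x + 0.1 * x^2                       # noiseless quadratic
fit <- lm(y ~ poly(x, 2, raw = TRUE), weights = 1 / x)
coef(fit)                                          # ~ c(2, 0.5, 0.1) here
```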

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
kvoronov
☆

Russia,
2019-11-21 21:57
(913 d 21:37 ago)

@ yjlee168
Posting: # 20855
Views: 4,592

## combine ARS & manual selection for lambda z in bear

Hello yjlee168, Hello Helmut!

I went through the steps described above. The results are not the same (because ARS in some cases chose more than 6 points), but they are still very close.

yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-21 22:15
(913 d 21:19 ago)

@ kvoronov
Posting: # 20856
Views: 4,656

## combine ARS & manual selection for lambda z in bear

Hi Kirill,

» I went through the steps described above. The results are not the same (because ARS in some cases chose more than 6 points), but they are still very close.

You said the final results are 'not the same...'. Compared to what? I am lost here.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
kvoronov
☆

Russia,
2019-11-22 13:57
(913 d 05:38 ago)

@ yjlee168
Posting: # 20860
Views: 4,569

## combine ARS & manual selection for lambda z in bear

Hi yjlee168,

So what I mean is: when I choose 'manual selection' for the lambda z estimate (step 3), I can manually select the data points based on the previously generated '***_lambda_z_select_all.csv' (ARS method), but only six of them, although for some subjects '***_lambda_z_select_all.csv' contains more than six points.

Anyway, thanks again to you and Helmut!
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-22 20:48
(912 d 22:47 ago)

(edited by yjlee168 on 2019-11-22 21:38)
@ kvoronov
Posting: # 20862
Views: 4,532

## bear v2.8.7 is released.

Hi Kirill,

I see your point now. I have increased the maximum number of data points for manual selection to 24 (I just cannot remove the limitation entirely yet) and have already uploaded v2.8.7 to SourceForge. I will remove the six-point selection limitation later and upload it soon; it can be done very quickly. I hope it's not too late for your needs. Thank you so much for your messages.

» ... but only six of them, although for some subjects '***_lambda_z_select_all.csv' contains more than six points.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
kvoronov
☆

Russia,
2019-11-25 18:14
(910 d 01:21 ago)

@ yjlee168
Posting: # 20869
Views: 4,018

## bear v2.8.7 is released.

Thanks a lot, yjlee168! I really appreciate your help. The best support!
yjlee168
★★★

Kaohsiung, Taiwan,
2019-11-25 20:46
(909 d 22:49 ago)

@ kvoronov
Posting: # 20871
Views: 3,973

## auto switch from ARS to manual selection

Hi Kirill

Not quite. I am still not happy about it. So far you still need to go through those 3 steps, which is really very manual. So I have already started adding a new feature that automatically switches from ARS (or other methods) to manual selection. Currently, if one selects ARS for the lambda z estimate, bear first checks all subjects to see whether all are eligible for ARS. If not, it lists all non-eligible subjects and stops. Here is the new feature: bear will check subject by subject whether each is eligible for ARS. If yes, it does ARS; if not, it will automatically switch to manual selection so the user can continue. That's what I want. How does the idea sound?
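The per-subject switch described above could be structured roughly like this. All helper functions are placeholders standing in for bear's real ARS and manual-selection machinery, and the eligibility rule (at least three points after Cmax) is only an assumption for the sketch:

```r
## Sketch of the proposed auto-switch: try ARS per subject, fall back
## to manual selection when a subject is not eligible.
is_ars_eligible <- function(d) {
  tmax <- d$time[which.max(d$conc)]
  sum(d$time > tmax) >= 3              # assumed rule: >= 3 points after Cmax
}
select_points_ars    <- function(d) tail(d, 3)  # stand-in for ARS
select_points_manual <- function(d) d           # stand-in for a manual pick

pk_data <- data.frame(subj = rep(1:2, c(6, 3)),
                      time = c(0:5, 0:2),
                      conc = c(0, 5, 4, 3, 2, 1, 0, 5, 4))
selections <- lapply(split(pk_data, pk_data$subj), function(one)
  if (is_ars_eligible(one)) select_points_ars(one) else select_points_manual(one))
```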

» ... The best support!

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
kvoronov
☆

Russia,
2019-11-28 14:19
(907 d 05:16 ago)

@ yjlee168
Posting: # 20885
Views: 3,884

## auto switch from ARS to manual selection

Hello, yjlee168!

That would be great. Automation can save time and prevent possible errors.
Thank you again!