elba.romero
☆    

Guadalajara, Mexico,
2013-07-02 07:08
(3922 d 15:00 ago)

Posting: # 10907
Views: 42,296
 

 reference dataset [Software]

Dear Bear users:

I would like to submit to the Secretary of Health in Mexico the first report of a clinical study evaluated with bear (2×2×2 crossover design). The preliminary review of my report indicates that I need to verify that my results are valid, and it was recommended that I demonstrate the software's performance with a reference dataset. I have been searching since then... (NIST Statistical Reference Datasets).

Any help would be really appreciated,

Regards,
Elba Romero
University of Guadalajara
Mexico
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-02 17:44
(3922 d 04:24 ago)

@ elba.romero
Posting: # 10914
Views: 40,318
 

 Public datasets

¡Hola Elba!

❝ The preliminary review of my report indicates that I need to verify that my results are valid, and it was recommended that I demonstrate the software's performance with a reference dataset.


OK, validation makes sense. I wonder whether your agency would also have asked if you had submitted data evaluated with commercial software. :-D

You should quote the statement from bear’s site:

Validation: We have tested bear with a BE dataset and found all output results to be the same as those generated by commercial software (NCA with WinNonlin and ANOVA with SAS).


Personally I use data sets from these references:
  1. Sauter R, Steinijans VW, Diletti E, Böhm E, Schulz H-U. Presentation of results from bioequivalence studies. Int J Clin Pharmacol Ther Toxicol. 1992;30(Suppl 1):S7–30.
    If you consider this reference too old, the same data are available in:
    • Hauschke D, Steinijans V, Pigeot I. Bioequivalence Studies in Drug Development. Chichester: John Wiley; 2007.
  2. Chow S-C, Liu J-p. Design and Analysis of Bioavailability and Bioequivalence Studies. 3rd ed. Boca Raton: Chapman & Hall/CRC Press; 2009.
    They use this (in)famous data set:
    • Clayton D, Leslie A. The bioavailability of erythromycin stearate versus enteric-coated erythromycin base when taken immediately before and after food. J Int Med Res. 1981;9(6):470–7.
    The same data set is part of Pharsight’s Phoenix/WinNonlin standard installation.
  3. Millard SP, Krause A, editors. Applied Statistics in the Pharmaceutical Industry. New York: Springer; 2001.
Ad #1: No idea how they evaluated the data; probably with SAS. Ad #2: SAS and Phoenix/WinNonlin. Ad #3: You can download Chapter 7; evaluation with S-Plus (the scripts run on R with some minor modifications). Maybe you should start with this one because it is the only one giving a Cook’s distance plot (which is part of bear’s output). If you want to validate bear’s NCA as well, you have to go with #1 because it is the only one containing the entire concentration data.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-03 05:10
(3921 d 16:58 ago)

@ Helmut
Posting: # 10917
Views: 39,929
 

 Public datasets

Dear Helmut and Elba,

So there is no "reference dataset" at all. Would it be possible to have some kind of "reference datasets" like the NIST StRD Nonlinear Regression Datasets or similar? Otherwise, in order to validate bear, users will need to pay for commercial computer programs. If so, why not just use commercial computer programs directly?

❝ [...] You should quote the statement from bear’s site:

Validation: We have tested bear with a BE dataset and found all output results to be the same as those generated by commercial software (NCA with WinNonlin and ANOVA with SAS).


We had a pdf file for this, but the link went missing unintentionally. I will put it back again; sorry about that. However, I am pretty sure it was not based on a "reference dataset".

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
elba.romero
☆    

Guadalajara, Mexico,
2013-07-03 07:19
(3921 d 14:49 ago)

@ yjlee168
Posting: # 10919
Views: 40,033
 

 Public datasets

Dear Yung-jin Lee,

❝ So there is no "reference dataset" at all. Would it be possible to have some kind of "reference datasets" like the NIST StRD Nonlinear Regression Datasets or similar?


I have already written to NIST StRD asking for this type of dataset. I am still waiting for an answer...

I was wondering if there is any initiative in this forum to develop a "public data reference" in order to evaluate the performance of bear (or another package).

Thank you for your help,

Regards,
Elba Romero
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-03 15:11
(3921 d 06:57 ago)

@ elba.romero
Posting: # 10926
Views: 39,801
 

 Public datasets

Dear Elba,

❝ [...]

❝ I have already written to NIST StRD asking for this type of dataset. I am still waiting for an answer...


Well, I don't think NIST StRD will do that, but I will be happy to hear their response.

❝ I was wondering if there is any initiative in this forum to develop a "public data reference" in order to evaluate the performance of bear (or another package).


Sounds doable. It may need more opinions and time to plan. There are plenty of BE/BA gurus in this forum from all over the world. They should be able to figure out how sooner or later.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-03 18:32
(3921 d 03:36 ago)

@ yjlee168
Posting: # 10930
Views: 39,883
 

 Challenging data sets

Dear Yung-jin & Elba,

❝ ❝ I was wondering if there is any initiative in this forum to develop a "public data reference" in order to evaluate the performance of bear (or another package).


❝ Sounds doable. It may need more opinions and time to plan. There are plenty of BE/BA gurus in this forum from all over the world. They should be able to figure out how sooner or later.


I’m a little short of time, but let me give you some ideas from what I did back in the dark ages validating my 60,000-line Pascal code. ;-) A very important point in validation is to challenge the software with extreme data: are errors trapped, are the results reasonable, etc.
I’ll split it up into two parts: NCA and BE.

NCA

► Data-limits

  • Generate a dataset containing only the headers, but no data. Does the software crash, or does it trap the error and come up with a useful (!) error message?
  • Dataset with only one line (i.e., time + concentration).
  • Dataset with empty lines or – in R-terms – ‘NAs’.
  • Dataset with negative or non-numeric values.

► Cmax/tmax

  • Two or more identical values. Is the first one identified as Cmax/tmax?

► λz-estimation

  • Fewer than three decreasing values.
  • Cz ≥ Cz-1.
  • 0 or NA within the decreasing values.
  • Dataset with known λz.
    Example: t(0, 1, 2), C(1, 0.5, 0.25) should give exactly λz=ln(2) and t½=1.
  • Simple dataset (use the AUC example below) and calculate λz from the last three values manually.
    1. Yi = ln(Ci) = (4.04305, 2.94444, 1.38629)
    2. t̄ = (∑ti)/3 = 13.33333 and Ȳ = (∑Yi)/3 = 2.79126
    3. λz = −∑(ti−t̄)(Yi−Ȳ)/∑(ti−t̄)² = 0.13260 and
       t½ = ln(2)/λz = 0.69315/0.13260 = 5.22729.
    4. If you want to calculate AUC0-∞ based on the estimated Ĉz (instead of the observed one) you also need the regression’s intercept:
       ln(C0) = Ȳ + λz·t̄ = 2.79126 + 0.13260×13.33333 = 4.55928
       All calculations in full precision; final result rounded to the software’s precision.
      A note for cross-validators: In PHX/WNL you can request this value in
      NCA > Options > Model Settings > [×] Intermediate Output.
      Results in the Core output:
      Intermediate Output
      -------------------
      Value for Lambda_z: 0.1326, and intercept: 4.5593
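
A quick cross-check of the manual λz steps in base R (a minimal sketch; t and C are the last three time/concentration pairs of the AUC example below):

t   <- c(4, 12, 24)                      # last three sampling times
C   <- c(57, 19, 4)                      # corresponding concentrations
fit <- lm(log(C) ~ t)                    # log-linear regression
lambda.z <- -coef(fit)[["t"]]            # 0.1326016
t.half   <- log(2)/lambda.z              # 5.22729
ln.C0    <- coef(fit)[["(Intercept)"]]   # 4.55928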

► Trapezoidal rules

  • Dataset with only one concentration >0.
  • Dataset with two identical concentrations (lin-up/log-down should switch to linear).
  • Dataset which can easily be calculated manually (KISS!). Example:
    t   C inc? linear                ∑ lin-up/log-down                      ∑
     0  0  NA                    0   0                            0         0
     1 62  y  ( 1- 0)(62+ 0)/2= 31  31 apply linear              31        31
     2 70  y  ( 2- 1)(70+62)/2= 66  97 apply linear              97        97
     4 57  n  ( 4- 2)(57+70)/2=127 224 ( 4- 2)(70-57)/ln(70/57)=126.55518 223.55518
    12 19  n  (12- 4)(19+57)/2=304 528 (12- 4)(57-19)/ln(57/19)=276.71272 500.26791
    24  4  n  (24-12)( 4+19)/2=138 666 (24-12)(19- 4)/ln(19/ 4)=115.52201 615.78992

    Perform the manual calculation to the same precision as the software’s output.

► AUC0-∞

  • AUC0-∞=AUCt+Cz,obs/λz (based on the observed Cz) should give
    666+4/0.13260=696.16555 (linear trapezoidal; extrapolated 4.3331%) and
    615.78992+4/0.13260=645.95547 (lin-up/log-down; extrapolated 4.6699%).
  • For the second variant we need the predicted Cz,pred = ℯ^(ln(C0)−λz·tz) = ℯ^(4.55928−0.13260×24) = 3.96238. Then
    666+3.96238/0.13260=695.88220 (linear trapezoidal; extrapolated 4.2941%) and
    615.78992+3.96238/0.13260=645.67212 (lin-up/log-down; extrapolated 4.6281%).
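
The same in base R (a minimal sketch reproducing both trapezoidal rules and both extrapolation variants; agrees with the values above up to rounding of λz):

t   <- c(0, 1, 2, 4, 12, 24)
C   <- c(0, 62, 70, 57, 19, 4)
dt  <- diff(t); dC <- diff(C)
lin  <- dt*(C[-1] + C[-length(C)])/2             # linear trapezoids
logd <- dt*-dC/log(C[-length(C)]/C[-1])          # log trapezoids (decreasing)
mix  <- ifelse(dC < 0 & C[-1] > 0, logd, lin)    # lin-up/log-down
AUCt <- c(sum(lin), sum(mix))                    # 666, 615.78992
lambda.z <- 0.1326016                            # from the regression above
Cz.obs  <- 4
Cz.pred <- exp(4.55928 - lambda.z*24)            # 3.96238
AUCt + Cz.obs/lambda.z                           # 696.16555, 645.95547
AUCt + Cz.pred/lambda.z                          # ≈695.882, ≈645.672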

BE (coming sooner or later…)

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-04 10:06
(3920 d 12:02 ago)

(edited by yjlee168 on 2013-07-04 12:20)
@ Helmut
Posting: # 10935
Views: 39,724
 

 Fantastic idea? ⇒ Challenging data sets

Dear Helmut,

Thank you for calling me "Yung-jin" again. Haha...

❝ [...]


❝ I’m a little short of time, but let me give you some ideas from what I did back in the dark ages validating my 60,000-line Pascal code. ;-) A very important point in validation is to challenge the software with extreme data: are errors trapped, are the results reasonable, etc.

❝ I’ll split it up into two parts: NCA and BE.

❝ [...]



Marvelous! I think I can code all of this as part of bear; it's just a function in R. Then users can generate the validation output if they need a validation report. The output will contain two parts: one is the results (with calculated answers) obtained from R (not necessarily from bear), and the other is the manual calculation or a method other than R (validation), as has already been done in this thread [edited]. What do you think about this idea?

❝ [...]


❝ 615.78992+3.96238/0.13260=645.67212 (lin-up/log-down; extrapolated 4.6281%).

❝ BE (coming sooner or later…)


I know. Take your time. No hurry at all, as we are still waiting for the response from NIST...

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-04 15:18
(3920 d 06:50 ago)

@ yjlee168
Posting: # 10937
Views: 39,804
 

 Fantastic idea!

Hi Yung-jin,

❝ Thank you for calling me "Yung-jin" again. Haha...


Sorry for using your family name in my previous posts (removed…). I thoughtlessly copied it from your signature. I didn’t want to be that formal.

❝ I think I can code all of this as part of bear; it's just a function in R. […] What do you think about this idea?


Great! Since it is the users’ responsibility to validate the software, there is no need to hurry. I would suggest starting a new thread collecting ideas: which data sets, results from other software (or even wetware: pencil, paper, brain), etc.
Maybe the Software category would fit best, because this issue is not limited to R.


Edit: Category changed.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-03 16:28
(3921 d 05:40 ago)

@ yjlee168
Posting: # 10929
Views: 39,898
 

 NIST; cross-validation

Dear Yung-jin,

❝ So there is no "reference dataset" at all. Would it be possible to have some kind of "reference datasets" like the NIST StRD Nonlinear Regression Datasets or similar?


If Elba succeeds in convincing NIST of its importance, we will have one… ;-)

Note that NIST’s datasets do not necessarily represent the “truth”, only the “best-available” solutions. Quote from the background information:

The generated datasets are designed to challenge specific computations. Real-world data include challenging datasets such as the Thurber problem, and more benign datasets such as Misra1a. The certified values are "best-available" solutions, obtained using 128-bit precision and confirmed by at least two different algorithms and software packages using analytic derivatives.


❝ Otherwise, in order to validate bear, users will need to pay for commercial computer programs.


Why? If you compare bear’s results with the published ones you don’t have to have the software which generated them.


P.S.: Yung-jin added the validation to bear’s website (162 pages, 4.45 MB). Now it’s up to the user to compare the different programs’ output. BTW: WinNonlin Pro 4.01, but which versions of bear and SAS?

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-04 01:47
(3920 d 20:21 ago)

@ Helmut
Posting: # 10931
Views: 39,824
 

 NIST; cross-validation

Dear Helmut,

❝ If Elba succeeds in convincing NIST of its importance, we will have one… ;-)


Agree.

❝ Note that NIST’s datasets do not necessarily represent the “truth”, only the “best-available” solutions. [...]



I see. But they have been good enough for comparing nonlinear regression software.

❝ Why? If you compare bear’s results with the published ones you don’t have to have the software which generated them.


I meant purchasing commercial software to do the validation first. Textbooks or published papers may give final results in tables for their datasets, but they do not provide the whole detailed output.


❝ P.S.: Yung-jin added the validation to bear’s website (162 pages, 4.45 MB). Now it’s up to the user to compare the different programs’ output. BTW: WinNonlin Pro 4.01, but which versions of bear and SAS?


bear was v2.4.0 (p. 92) and SAS, I guess, was v9.1.3, from when I was at KMU (university license).

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-04 02:45
(3920 d 19:23 ago)

@ yjlee168
Posting: # 10932
Views: 39,724
 

 NIST; cross-validation

Dear Yung-jin,

❝ ❝ Note that NIST’s datasets do not necessarily represent the “truth”, only the “best-available” solutions. […]


❝ I see. But they have been good enough for comparing nonlinear regression software.


Sure. It’s nice to have results obtained in 128-bit precision. Important for ‘flat’ SSQ-surfaces where rounding may be an issue.

❝ ❝ Why? [:blahblah:]


❝ […] Textbooks or published papers may give final results in tables for their datasets, but they do not provide the whole detailed output.


Agree. The most detailed one is #1 from this post.

❝ ❝ BTW: WinNonlin Pro 4.01, but which versions of bear and SAS?

❝ bear was v2.4.0 (p. 92) and SAS, I guess, was v9.1.3, from when I was at KMU (university license).


THX!

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
elba.romero
☆    

Guadalajara, Mexico,
2013-07-03 07:10
(3921 d 14:58 ago)

@ Helmut
Posting: # 10918
Views: 39,871
 

 Public datasets

¡Hola Helmut!
It is nice to see Spanish words once in a while.

❝ I wonder whether your agency would also have asked if you had submitted data evaluated with commercial software. :-D


Not in this case! The Agency has started to learn about other tools ;-)

❝ You should quote the statement from...


I'll take that into account.

❝ Personally I use data sets from these references...


Thank you for all the references!

Bear, welcome to Mexico!!!

Regards,

Elba Romero
University of Guadalajara
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-06 04:24
(3461 d 17:44 ago)

@ elba.romero
Posting: # 13653
Views: 37,422
 

 Eight Reference Datasets (2×2×2 ☑)

Dear Elba and all,

❝ […] 2×2×2 crossover design. […] I need to verify that my results are valid, and it was recommended that I demonstrate the software's performance with a reference dataset. I have been searching since then...


Took a while… In the meantime some of the usual suspects of the forum have done that.*
You can expect more to come (2-group parallel ☐)…

Enjoy. :-D



Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2014-10-06 15:20
(3461 d 06:48 ago)

@ Helmut
Posting: # 13654
Views: 37,080
 

 What???

Respectable highly distinguished honourable sir,


❝ Took a while… In the meantime some of the usual suspects of the forum have done that.*


What??? You cannot be serious.



What a load of absolute garbage. I know some of the authors and to tell the truth I have no idea why they would want to waste anyone's time with such a pile of nonsense. On board my proud vessel we just hit the auto-IQ/OQ/PQ scripts and we never had any problems.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-06 19:08
(3461 d 03:00 ago)

@ ElMaestro
Posting: # 13655
Views: 36,997
 

 What???

Ahoy, my Capt’n!

❝ What??? You cannot be serious.


Great!


❝ On board my proud vessel we just hit the auto-IQ/OQ/PQ scripts and we never had any problems.


:pirate:

Actually this was once one of my favorite diving sites, where I used to let go the anchor of my vessel. Nice slope followed by a drop-off. Now a grid of steel and a bunch of concrete blocks…

BTW, I found another reference dataset (supplemental examples to the veterinary BE guidance VICH GL52; interesting that the guidance does not suggest rounding of the CI). Quote from the supplement:

All correctly programmed analyzed data should give the following results (Tables 2 and 3). […]

Using the statistical output information, we calculated the confidence bounds:

Lower BE Bound = Exp(-0.0346) = 0.97
Upper BE Bound = Exp(0.0738) = 1.08

If both the lower and upper BE bounds are between 0.80 and 1.25, bioequivalence is established for that value.


I got for p (Sequence) 0.9529 (not 0.9527) and the CI as 0.9660–1.0766.
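
Exponentiating the reported bounds without rounding reproduces that:

round(exp(c(-0.0346, 0.0738)), 4)
[1] 0.9660 1.0766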

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2014-10-06 19:31
(3461 d 02:37 ago)

@ Helmut
Posting: # 13656
Views: 36,948
 

 What???

Hi Hötzi,

❝ BTW, I found another reference dataset (...)


Great find, thanks for the link.

Now, I couldn't help but note they reffed Potvin as well.
Looks to me like their examples also include a Potvin B run. I am not a veterinarian but ... I think they got it wrong as they plugged in the observed GMR from stage 1 and not 0.95 for stage 2 dimensioning.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-06 21:05
(3461 d 01:03 ago)

@ ElMaestro
Posting: # 13657
Views: 37,275
 

 What the heck‽

Hi ElMaestro,

❝ Looks to me like their examples also include a Potvin B run. I am not a veterinarian but ... I think they got it wrong as they plugged in the observed GMR from stage 1 and not 0.95 for stage 2 dimensioning.


Yes, that’s weird. Subsection II.D.3 of the guidance smells of copy-pasting from EMA’s GL (contrary to Method C, which is generally suggested by the FDA). Furthermore,

The plan to use a two-stage approach should be pre-specified in the protocol along with the number of animals to be included in each stage and the adjusted significance levels to be used for each of the analyses.

I beg your pardon?

Figure 1 of the supp. is Potvin’s B:

[…] sample size based on variance stage 1…

But in the paragraph below:

[…] sample size based on the information derived at Stage 1.


But let’s continue:

SCENARIO: For the sake of this example, we will use the following Stage 1 assumptions:

  • We estimate a residual error of 15% CV and a ratio of the test/reference means of 0.90.
  • We will conduct Stage 1 with 20 subjects (10 per sequence).

Nice to call animals subjects, but these guys are adventurous, aren’t they? Let’s assume they don’t know that with a T/R ratio of 0.90 an αadj of 0.0294 is not sufficient any more (covering the grid of n1 12…60, CV 10…100%). My magic code tells me…

For adj. α of 0.0294 a maximum inflation
of 0.053753 is detected at CV=20% and n1=12.

… and suggests to use 0.0272 instead.*
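
If anybody wants to reproduce such a scan, Power2Stage’s simulations will do (a sketch of the worst-case cell only – not my actual code; the full grid simply loops over n1 and CV):

library(Power2Stage)
# empiric type I error: simulate at theta0=1.25 (the BE limit)
power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=12,
   GMR=0.9, CV=0.2, targetpower=0.8, pmethod="nct",
   theta0=1.25, nsims=1e6)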


However, let’s believe in Mr Pocock.

library(PowerTOST)
sampleN.TOST(alpha=0.0294, CV=0.15, theta0=0.9, design="2x2x2")
+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.0294, target power = 0.8
BE margins        = 0.8 ... 1.25
Null (true) ratio = 0.9,  CV = 0.15

Sample size (total)
 n     power
26   0.802288

power.TOST(alpha=0.0294, CV=0.15, theta0=0.9, n=20)
[1] 0.6850327


Brilliant! They must love a second stage. Some more stuff about inflation & power:

library(Power2Stage)
power.2stage(method="B", alpha=c(0.0294, 0.0294), n1=20,
   GMR=0.9, CV=0.15, targetpower=0.8, pmethod="nct",
   usePE=FALSE, Nmax=Inf, theta0=1.25, nsims=1e6)

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
GMR= 0.9 and mse of stage 1 in sample size est. used

1e+06 sims at theta0= 1.25 (p(BE)='alpha').
p(BE)   = 0.038733
p(BE) s1= 0.029144
pct studies in stage 2= 74.27%

Distribution of n(total)
- mean (range)= 27.4 (20 ... 80)
- percentiles
 5% 50% 95%
 20  26  42


With this CV, T/R, n1, and 0.0294 there will be no inflation. But that’s not true for other combinations. Overall power ~85% (~27% of studies will proceed to the second stage).

What if they want to adapt for the T/R of stage 1 (in the code above switch to usePE=TRUE)?

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+06 sims at theta0= 1.25 (p(BE)='alpha').
p(BE)   = 0.047555
p(BE) s1= 0.029144
pct studies in stage 2= 36.39%

Distribution of n(total)
- mean (range)= 203753.4 (20 ... 4375926130)
- percentiles
  5%  50%  95%
  20   20 5932


Still no inflation, but the sample sizes ring the alarm bell. Old story: full adaptation rarely ‘works’ in BE. This time the entire output for power:

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= Inf
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+05 sims at theta0= 0.9 (p(BE)='power').
p(BE)   = 0.948
p(BE) s1= 0.68365
pct studies in stage 2= 26.73%

Distribution of n(total)
- mean (range)= 17250.8 (20 ... 1117221800)
- percentiles
 5% 50% 95%
 20  20 160


Oops! In the worst case almost the entire population of India? In the spirit of Karalis / Macheras let’s add a futility criterion limiting the total sample size. OK, why not 100? No inflation; power:

Method B: alpha (s1/s2)= 0.0294 0.0294
Futility criterion Nmax= 100
CV= 0.15; n(stage 1)= 20; GMR= 0.9
BE margins = 0.8 ... 1.25
PE and mse of stage 1 in sample size est. used

1e+05 sims at theta0= 0.9 (p(BE)='power').
p(BE)   = 0.86378
p(BE) s1= 0.68365
pct studies in stage 2= 18.29%

Distribution of n(total)
- mean (range)= 27.3 (20 ... 100)
- percentiles
 5% 50% 95%
 20  20  68


Might work. Still not what the guidance wants – a pre-specified sample size in both stages. Will they accept a maximum total sample size as a futility criterion? Does one have to perform the second stage with the pre-specified sample size even if the calculated one is lower?


  • Validation of chosen adjusted α (0.0272):
    “Method B”, assumed T/R 90%, target power 80%
    CV ↓ n1 →     12       20       24       36       48       60
    0.1  0.031667 0.027057 0.026990 0.027151 0.027106 0.027074
    0.15 0.046101 0.036577 0.032393 0.027345 0.027106 0.027074
    0.2  0.049838 0.046210 0.043916 0.036062 0.029193 0.027194
    0.3  0.039896 0.048036 0.049104 0.047415 0.045185 0.042275
    0.4  0.030938 0.035728 0.040070 0.048512 0.048743 0.047437
    0.5  0.028471 0.029026 0.030577 0.038588 0.046841 0.049009
    0.6  0.027897 0.027882 0.028254 0.030001 0.036409 0.044488
    0.7  0.027576 0.027528 0.027789 0.027686 0.029420 0.034456
    0.8  0.027430 0.027329 0.027578 0.027432 0.027694 0.028885
    0.9  0.027352 0.027308 0.027633 0.027173 0.027348 0.027694
    1    0.027275 0.027276 0.027640 0.027302 0.027268 0.027485
    With an adj. α of 0.0272 a maximum inflation of 0.049838 is seen at CV=20% and n1=12.
    Significance limit for 1e+06 simulations: 0.05036; no significant inflation seen.
    The chosen adjusted α is justified within the assessed range of stage 1 sample sizes (12-60) and CVs (10-100%).

    Power for the same grid:
    CV ↓ n1 →    12      20      24      36      48      60
    0.1  0.88303 0.94968 0.97642 0.99834 0.99992 1.00000
    0.15 0.83337 0.84816 0.85456 0.91422 0.96897 0.99007
    0.2  0.81034 0.82734 0.83347 0.83663 0.85491 0.90205
    0.3  0.77531 0.80634 0.80897 0.82105 0.82504 0.82862
    0.4  0.75923 0.78497 0.79476 0.81014 0.81775 0.82105
    0.5  0.75205 0.77630 0.78163 0.79811 0.80779 0.81514
    0.6  0.74976 0.77267 0.77753 0.78933 0.79713 0.80734
    0.7  0.74873 0.77240 0.77691 0.78657 0.79120 0.79915
    0.8  0.74768 0.77115 0.77602 0.78450 0.78980 0.79423
    0.9  0.74564 0.77041 0.77521 0.78571 0.78838 0.79215
    1    0.74505 0.77060 0.77473 0.78502 0.78649 0.79281
    With an adj. α of 0.0272 a minimum power of 0.74505 is seen at CV=100% and n1=12.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-07 11:53
(3460 d 10:15 ago)

@ Helmut
Posting: # 13659
Views: 36,958
 

 link of electronic supplement material?

Dear Helmut and Detlew,

Congratulations on the published article, and great that you share it. However, I cannot browse the 'Electronic supplementary material' from the link in the pdf file. Am I missing anything there?

❝ Took a while…

❝ You can expect more to come (2-group parallel ☐)…


Why jump into '2-group parallel'? What about replicate?

Thanks in advance.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
nobody
nothing

2014-10-07 12:59
(3460 d 09:09 ago)

@ yjlee168
Posting: # 13660
Views: 36,886
 

 link of electronic supplement material?

click:

https://dx.doi.org/10.1208/s12248-014-9661-0

...down the page you see the files (.rtf, .txt)

PS: I use noscript addon, ymmv

Kindest regards, nobody
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-07 13:35
(3460 d 08:33 ago)

@ nobody
Posting: # 13661
Views: 37,019
 

 Thanks!

Dear nobody,

Got it. Many thanks.

https://dx.doi.org/10.1208/s12248-014-9661-0

❝ ...down the page you see the files (.rtf, .txt)


All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
nobody
nothing

2014-10-07 13:44
(3460 d 08:24 ago)

@ yjlee168
Posting: # 13662
Views: 36,761
 

 Thanks!

...additional info: to make the R script in the .rtf file work, un-comment one of the two options for mse ;-)

Kindest regards, nobody
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-07 15:54
(3460 d 06:14 ago)

@ nobody
Posting: # 13665
Views: 36,921
 

 Direct links & hint

Hi nobody and all,

the direct links below (in the public domain; you don’t have to buy the paper) ;-)

Datasets:
  1. 12248_2014_9661_MOESM1_ESM.txt
  2. 12248_2014_9661_MOESM2_ESM.txt
  3. 12248_2014_9661_MOESM3_ESM.txt
  4. 12248_2014_9661_MOESM4_ESM.txt
  5. 12248_2014_9661_MOESM5_ESM.txt
  6. 12248_2014_9661_MOESM6_ESM.txt
  7. 12248_2014_9661_MOESM7_ESM.txt
  8. 12248_2014_9661_MOESM8_ESM.txt
SAS- and R-scripts:

❝ ...additional info: to make the R script in the .rtf file work, un-comment one of the two options for mse ;-)


THX for pointing that out! Remove the hash # in

line 42: #mse <- summary(muddle)$sigma^2


As stated in the RTF’s title, note that the scripts have to be adjusted for the user’s directories and/or file names.
  • SAS:
    line 1:  file name
    line 5:  path
  • R:
    line 81: path
    I suggest saving the reference datasets in an empty directory. The script will analyse all files with the *.txt extension.
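
The idea behind the script’s loop, as a sketch only (not the published script itself; path and import are placeholders):

path  <- "C:/refdata"                        # adjust (line 81)
files <- list.files(path, pattern="\\.txt$", full.names=TRUE)
for (f in files) {
  ds <- read.table(f, header=TRUE)           # layout as in the supplement
  cat("analysing", basename(f), "\n")
  # ... ABE evaluation of ds ...
}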

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2014-10-07 14:36
(3460 d 07:32 ago)

@ yjlee168
Posting: # 13663
Views: 37,024
 

 Replicated designs are relevant, too

Hello yjlee,

❝ Why jump into '2-group parallel'? What about replicate?


2-group parallel is probably still the second-most common BE design, I think, and that's to me quite a good argument for prioritising it.

As you say, replicate designs are also relevant, but it would open up a can of worms:
  1. EU all fixed effects (=normal linear model) versus US subject as random (=mixed model) and the EU vs US way of scaling.
  2. Various modes of imbalance (missing period data or different numbers in TRTR and RTRT etc)
  3. The use of denominator degrees of freedom as per Satterthwaite or something else; software limitations on the same matter.
Finally, I think no one that I know of doing BE is 100% well versed in the inner workings of mixed models or the syntactical wonders and settings of R, SAS, WinNonlin, SPSS or whatever, which to some extent would need deep understanding if we go the US route. With mixed models there will be quite a few potential choices to make, and this is where the understanding matters a lot. For starters: do we use REML and not ML because we have read some webpages or books suggesting REML over ML, or do we use REML because we have a deep understanding of the merits of REML and the lack of same for ML in the context of BE? Syntax for fits is complex regardless of language/package and of how experimental units are coded, e.g. "fm3 <- lmer(strength ~ (1|batch) + (1|batch:cask), Pastes)" – and who really knows what all that "~" and "|" does to your ZGZt+R? If you use an option to do Kenward-Roger, are you then not compliant? And then there are the optimiser settings. And so forth.
Tricky, tricky, tricky.
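
At least the EU 'all effects fixed' route stays within plain lm() territory – a minimal sketch (a data frame pk with factors subj, per, trt (levels R/T) and the log-transformed response lnPK is made up):

muddle <- lm(lnPK ~ factor(subj) + per + trt, data=pk)
pe <- coef(muddle)[["trtT"]]                        # T-R on the log scale
se <- coef(summary(muddle))["trtT", "Std. Error"]
ci <- pe + c(-1, 1)*qt(0.95, df.residual(muddle))*se
round(100*exp(c(pe, ci)), 2)                        # PE and 90% CI in percent

The US mixed-model route has no such one-liner, which is rather the point.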

Pass or fail!
ElMaestro
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-07 15:00
(3460 d 07:08 ago)

@ ElMaestro
Posting: # 13664
Views: 36,772
 

 Replicated designs are relevant, too

Dear ElMaestro,

❝ 2-group parallel is probably still the second-most common BE design, I think, and that's to me quite a good argument for prioritising it.


I see. Some posts about differences in replicate BE data analysis using WNL or SAS can be found in this forum.

❝ As you say, replicate designs are also relevant, but it would open up a can of worms:


Wow! Sounds scary. But very true.

❝ [...] "fm3 <- lmer(strength ~ (1|batch) + (1|batch:cask), Pastes)" and who really knows what all that "~" and "|" does for your ZGZt+R? If you use an option to do Kenward-Rogers are you then not complicant? And then there are the optimiser settings. And so forth.


You're right. It still remains unsolved in R.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-07 16:47
(3460 d 05:21 ago)

@ yjlee168
Posting: # 13666
Views: 36,872
 

 RSABE: SAS and PHX/WNL only?

Hi Yung-jin,

❝ […] Some posts about differences in replicate BE data analysis using WNL or SAS can be found in this forum.

❝ […] Still remains unsolved with R.


So far I do not know how to set up reference-scaling (FDA+EMA) in any software except SAS and Phoenix/WinNonlin. I’m not overly optimistic for R (see here; still current as of July 2014).

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-08 12:05
(3459 d 10:03 ago)

@ Helmut
Posting: # 13668
Views: 36,912
 

 for the future?

Dear Helmut and ElMaestro,

Thank you for your responses. I think your research work (incl. the reference datasets and the results of the software validation) for BE/BA is very meaningful. Probably in some respects (such as mixed models for replicate BE or RSABE) R is still not officially recognized (or accepted) for BE/BA data analysis. However, if reference datasets and software-validation results are available, they should be very helpful for program developers in the future. For my part, I am still looking forward to seeing such reference datasets and software validation for replicate BE, RSABE, TSD, etc. (without R). This is not only for R package developers, but also for developers of other statistical software such as SPSS/Minitab/Statistica. Indeed, we cannot do it properly with R or other statistical software right now, as you said. That does not mean it cannot be solved in the future, either. However, reference datasets for all kinds of BE study designs are absolutely required, IMHO.

Besides the NIST StRD Nonlinear Regression Datasets mentioned previously in this thread, another good example is the data-mining dataset repository at UCI. [edited]

Quoted from ElMaestro: 'I may be wrong, but...'

❝ So far I do not know how to set up reference-scaling (FDA+EMA) in any software except SAS and Phoenix/WinNonlin. I’m not overly optimistic for R (see here; still current as of July 2014).


All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-09 02:26
(3458 d 19:42 ago)

@ yjlee168
Posting: # 13671
Views: 36,669
 

 Star Trek: The Next Generation

Dear Yung-jin,

I tried to compile the job for Lieutenant Commander Data. See all the possible methods to explore:
  • FDA
    • ABE
      • FA0(2): according to the guidance; sometimes a warning, sometimes no convergence
      • CSH: suggested as an alternative in the guidance; fails to converge sometimes as well
      • FA0(1): no problems with convergence in all datasets we have tested so far
    • RSABE
      • The wacky progesterone code…
        Proc Mixed for full replicates and Proc GLM for partial replicates
      • Direct calculation of the CI from intra-subject contrasts / expansion of limits
  • EMA
    • ABE
      • all effects fixed
      • subjects random
    • ABEL
      • Method A: subjects fixed
      • Method B: subjects random
      • Method C: = FDA’s (same problems in partial replicates)
      • Assessment of outliers
If I counted correctly, these are at least ten evaluations (depending on whether outliers are detected or not) for just one dataset. Multiply by the number of datasets and software packages. Such a paper would be a monster. Or should we opt for a book instead?

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2014-10-09 10:09
(3458 d 11:59 ago)

@ Helmut
Posting: # 13673
Views: 36,549
 

 Star Trek: The Next Generation

Dear Helmut,

just one nitpickin'

❝ ...

  • EMA

    • ABE
      • all effects fixed
      • subjects random
    • ABEL
      • Method A: subjects fixed
        ...

Regards,

Detlew
nobody
nothing

2014-10-09 10:23
(3458 d 11:45 ago)

@ Helmut
Posting: # 13674
Views: 36,659
 

 Star Trek: The Next Generation

...nope, this is a classical open-source project for a forum/wiki with peer review on the fly. Maybe in the end a scientific paper referencing the data sets in the forum... :-D (I could be wrong, but...)

Kindest regards, nobody
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-09 13:06
(3458 d 09:02 ago)

@ Helmut
Posting: # 13675
Views: 36,836
 

 Star Trek: The Next Generation

Dear Helmut,

Have you ever tried to add/change the options MAXITER (the default is only 50; consider increasing this value) and/or MAXFUNC (default 150) of PROC MIXED when getting a convergence failure for ABE? I did that recently with lme() in bear for a dataset that failed to converge with the default number of iterations, and it solved the convergence failure. PROC MIXED's optimizer is the Newton-Raphson algorithm; I don't know whether this works with PROC MIXED or not.
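
In case it helps others, the knob in nlme is lmeControl() – a minimal sketch (the model and data names are illustrative, not bear's actual code):

library(nlme)
ctrl <- lmeControl(maxIter=200, msMaxIter=200,   # defaults are both 50
                   niterEM=100)                  # default is 25
mod  <- lme(lnAUC ~ seq + per + trt, random = ~1|subj,
            data=pk, control=ctrl)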

❝ [...]


FDA

❝      ABE

❝          FA0(2): according to the guidance; sometimes a warning, sometimes no convergence

❝          CSH: suggested as an alternative in the guidance; fails to converge sometimes as well [...]


❝          FA0(1): no problems with convergence in all datasets we have tested so far [...]


All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2014-10-09 13:52
(3458 d 08:16 ago)

@ yjlee168
Posting: # 13676
Views: 37,805
 

 Some comments

Hi Yung-jin,
Hi all,

a few comments so far:
  1. I just read the documentation for the Proc Mixed settings; it looks to me like it isn't possible at all to control starting values for the optimiser? That would be rather silly, if you ask me.
    Anyway, if it is possible to select starting values, then I'd propose to first fit the stuff with ML (which sometimes-to-often has much better convergence) and then feed the fitted values to the REML optimiser as starting values.

  2. If someone can show me how to pass values by reference (~pointer) in R then I think I might be able to write R code for this stuff. Not good/solid/clever/smart/fast code at all, but code that does the job. Could take a bit of time, though.

To explain what I mean by passing values by reference consider this example:

Bar=function(X)
{
   X=X+1
   cat("Hello Mom!\n")
   return(0)
}


Foo=function(B)
{
   Bar(B)
   cat("B is",B,"\n")
}

Foo(7)


It will print 7 and not 8, because the function parameter is passed by value and not by reference.

I simply need a way for the argument to be passed by reference so that Bar can change it and Foo can continue doing stuff on B once the change has happened.

Plenty of websites discuss how to do it. But I am just ElMaestro and my brain is walnut-sized, so I need more than a link to a website to understand it, please.
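
P.S.: From what I gather, the usual trick is to wrap the value in an environment, because environments are not copied when passed to a function – a sketch (whether that counts as true pass-by-reference I leave to the experts):

Bar=function(env)
{
   env$X=env$X+1             # changes the caller's value in place
   cat("Hello Mom!\n")
   return(0)
}

Foo=function(B)
{
   e=new.env()
   e$X=B
   Bar(e)
   cat("B is",e$X,"\n")      # prints 8
}

Foo(7)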

Pass or fail!
ElMaestro
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-09 14:18
(3458 d 07:50 ago)

@ ElMaestro
Posting: # 13677
Views: 36,569
 

 Some comments

Dear ElMaestro,

Exactly. No starting values can be set with either PROC MIXED in SAS or lme() from R's nlme package. I still have no idea which parameters are being optimized in either software.

❝ 1. I just read the documentation for the Proc Mixed settings; it looks to me like it isn't possible at all to control starting values for the optimiser? That would be rather silly, if you ask me. [...]



Silly? Maybe. Probably it is not possible, or not necessary. Otherwise, it would not be that way.

I need to sneak in and have a look at the source code of the nlme package. I did that before, but with no luck so far. And even if we figure it out, there may be nothing we can do: nlme is a very big and complicated package.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
d_labes
★★★

Berlin, Germany,
2014-10-10 10:35
(3457 d 11:33 ago)

@ yjlee168
Posting: # 13684
Views: 36,323
 

 Proc MIXED - start value

Dear Yung-jin, dear ElMaestro,

❝ Exactly. No starting values can be set with either PROC MIXED in SAS or lme() from R's nlme package. ...


Objection, Your Honor! At least for SAS.
Just to cite from the SAS Help page of Proc MIXED:
"PARMS Statement

PARMS (value-list) ...</ options> ;

The PARMS statement specifies initial values for the covariance parameters, or it requests a grid search over several values of these parameters. ..."

For R I bet there is also some possibility, but I don't know exactly.
Maybe the function nlme() is what you need.

Regards,

Detlew
ElMaestro
★★★

Denmark,
2014-10-10 13:44
(3457 d 08:24 ago)

@ d_labes
Posting: # 13685
Views: 36,338
 

 Proc MIXED - start value

Great d_labes,

PARMS (value-list) ...</ options> ;


Thou shalt set swt=0.3274, swr=0.2727, sbt=0.5401, sbr=0.4993 for these divine settings will make the reml optimiser converge.

And the Lord said unto John "Come forth, and receive eternal life".
But John came fifth, and won a toaster.

Pass or fail!
ElMaestro
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-11 23:09
(3455 d 22:59 ago)

@ d_labes
Posting: # 13688
Views: 36,576
 

 Proc MIXED - start value

Dear Detlew,

Thank you so much for pointing that out. Indeed, there is a PARMS option for PROC MIXED. However, I still wonder how to set it up. Could you share the results (incl. code) of running PROC MIXED with and without the PARMS option? I don't have SAS or a dataset to try this. :-D I was wondering whether PARMS is for the fixed and random effects (the model), not for the covariance parameters.

❝ [...]

❝ Objection, Your Honor! At least for SAS.

❝ Just to cite from the SAS Help page of Proc MIXED:

❝ "PARMS Statement


PARMS (value-list) ...</ options> ;


The PARMS statement specifies initial values for the covariance parameters, or it requests a grid search over several values of these parameters. ..."


not quite sure about this.

❝ For R I bet there is also some possibility. But don't know exactly.

❝ Eventually the function nlme() is what you need.


I did find that there is an option (start) in nlme(). This start option is an optional numeric vector, or list of initial estimates for the fixed effects and random effects. If declared as a numeric vector, it is converted internally to a list with a single component fixed, given by the vector. The fixed component is required, unless the model function inherits from class selfStart, in which case initial values will be derived from a call to nlsList....

So I don't think that this option (start) is for setting the initial values of covariance parameters.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2014-10-11 23:18
(3455 d 22:50 ago)

@ yjlee168
Posting: # 13689
Views: 36,441
 

 Proc MIXED - start value

Hi Yung-jin,

(if I may chime in)

❝ However, I still wonder how to set it up. Could you share the results (incl. code) of running PROC MIXED with and without the PARMS option? I don't have SAS or a dataset to try this. :-D I was wondering whether PARMS is for the fixed and random effects (the model), not for the covariance parameters.


❝ ❝ The PARMS statement specifies initial values for the covariance parameters, or it requests a grid search over several values of these parameters. ..."



Means: the PARMS statement gives you control of the starting values for the random effects – these are the covariances.


❝ I did find that there is an option (start) in nlme(). This start option is an optional numeric vector, or list of initial estimates for the fixed effects and random effects. If declared as a numeric vector, it is converted internally to a list with a single component fixed, given by the vector. The fixed component is required, unless the model function inherits from class selfStart, in which case initial values will be derived from a call to nlsList....


❝ So I don't think that this option (start) is for setting the initial values of covariance parameters.


This is a bit mumbo-jumbo. As above, inits for the random effects might be what you are looking for, but I don't know what is meant by the list with a single component fixed.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-09 16:08
(3458 d 06:00 ago)

@ yjlee168
Posting: # 13680
Views: 36,703
 

 Optimizer settings

Dear Yung-jin,

❝ Have you ever tried to add/change the options MAXITER (the default is only 50; consider increasing this value) and/or MAXFUNC (default 150) of PROC MIXED when getting a convergence failure for ABE?


I’m not gifted with SAS.
In Phoenix the default maximum iterations are also 50. Increasing doesn’t help, since I have never seen more than seven iterations for datasets ending up with a warning:

Newton's algorithm converged with a modified Hessian, indicating that the model may be over-specified. The output is suspect. The algorithm converged, but not at a local minimum. It may be in a trough, at a saddle point, or on a flat surface.


❝ No starting values can be set with either PROC MIXED in SAS or lme() from R's nlme package.


Datasets failing to converge in Phoenix:

Initial variance matrix is not positive definite. This indicates an invalid starting point for the algorithm.

However, initial values in Phoenix REML are obtained by the Method of Moments (MoM). It is not possible to set them “manually”.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
nobody
nothing

2014-10-09 17:32
(3458 d 04:36 ago)

@ Helmut
Posting: # 13681
Views: 36,542
 

 Optimizer settings

Does that remind me of something... NONMEM?

Kindest regards, nobody
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2014-10-09 20:14
(3458 d 01:54 ago)

@ Helmut
Posting: # 13682
Views: 36,473
 

 Optimizer settings

Dear Helmut,

❝ I’m not gifted with SAS.


OK.

❝ [...]

❝ Datasets failing to converge in Phoenix:

Initial variance matrix is not positive definite. This indicates an invalid starting point for the algorithm.

However, initial values in Phoenix REML are obtained by the Method of Mo­ments (MoM). It is not possible to set them “manually”.


If this is the message for the convergence failure, then it is not caused by "reaching its max. iterations..." as with lme(). Is there any random effect specified, or are all effects fixed in the model in this situation? Could you try with all effects fixed to see if it improves?

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-10-09 20:34
(3458 d 01:34 ago)

@ yjlee168
Posting: # 13683
Views: 36,499
 

 FDA’s model…

Dear Yung-jin,

❝ […] it is not caused by "reaching its max. iterations..." as with lme().


Exactly.

❝ Is there any random effect specified, or are all effects fixed in the model in this situation? Could you try with all effects fixed to see if it improves?


Nope. It happens (only for some nasty datasets) in a partial replicate design (full replicates are OK) when following FDA’s mixed-effects model for ABE:

PROC MIXED
  data=pk;
  CLASSES SEQ SUBJ PER TRT;
  MODEL LAUCT = SEQ PER TRT/ DDFM=SATTERTH;
  RANDOM TRT/TYPE=FA0(2) SUB=SUBJ G;
  REPEATED/GRP=TRT SUB=SUBJ;
  ESTIMATE 'T vs. R' TRT 1 -1/CL ALPHA=0.1;
  ods output Estimates=unsc1;
  title1 'unscaled BE 90% CI - guidance version';
  title2 'AUCt';
run;

data unsc1;
  set unsc1;
  unscabe_lower=exp(lower);
  unscabe_upper=exp(upper);
run;


The problem lies in the RANDOM statement, where the covariance structure is specified with TYPE=FA0(2). CSH or FA0(1) helps. I don’t see how to go with ‘all effects fixed’ here…

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-11-03 15:54
(3433 d 05:14 ago)

@ Helmut
Posting: # 13826
Views: 35,824
 

 Just received a letter…

… from

Thermo Fisher
S C I E N T I F I C

Quote:

           Upon investigation, our development team has confirmed this aspect of the Article. Specifically, as noted in the user documentation, Kinetica software does employ an algorithm for a balanced design in both the case of balanced and unbalanced studies. Therefore, any results from unbalanced studies would be inaccurate. Although individual cases may differ, our preliminary assessment suggests that in most cases this error would result in only a small difference in the overall results.
           As a leading supplier to the medical and life science research community, the reliability and quality of our products is of the utmost importance to us. We are currently implementing a software update and will be performing rigorous testing to ensure that this issue is resolved. We expect to complete the software update within four to eight weeks and will notify you as soon as it becomes available.


Download a scan here.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2014-12-09 16:51
(3397 d 04:17 ago)

@ Helmut
Posting: # 14037
Views: 35,357
 

 11 Reference Datasets (2-group parallel ☑)

Dear Elba and all,

❝ You can expect more to come (2-group parallel ☐)…


… the story continues.*



Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes