Helmut ★★★
Vienna, Austria, 2018-07-12 19:47
Posting: # 19039

Validation of PhEq_bootstrap [Software]

Dear all,

I was asked whether it is possible to validate PhEq_bootstrap (see this post). Sure. Since the source code is available, even a “white-box” validation is doable if one is familiar with Object Pascal. :-D
I had a quick look based on the example data sets given by Shah et al.* (which are referred to in the software’s documentation).

I got for the 90% CI (PhEq_bootstrap v1.2, 2014-06-22, 64-bit Windows):

        500 bootstraps                 1,000 bootstraps
batch   Shah et al.    PhEq_bootstrap  Shah et al.    PhEq_bootstrap
  1     52.79, 68.15   53.10, 67.65    53.01, 68.34   52.82, 67.85
  2     48.33, 53.68   48.01, 53.50    48.25, 53.69   48.16, 53.64
  3     48.56, 54.10   48.40, 54.49    48.54, 54.56   48.12, 54.14
  4     48.39, 51.55   48.23, 51.45    48.38, 51.59   48.31, 51.51
  5     46.11, 50.09   45.90, 49.69    46.05, 50.04   46.00, 50.00


Good news. With one exception (test batch 1, 500 bootstraps) the lower CLs are more conservative than the ones reported by Vinod. With 1,000 bootstraps the results are always slightly more conservative.

Now for the bad news. There is one stupid thing in the software when it comes to validation (line 535 of the source Conv_to_learn1.pas):

RandSeed:=random(random(1000000))+random(random(1000000));

That means that the seed of the (pseudo)random number generator is always a random number itself. Therefore, it is impossible to reproduce runs. Try it: you will not get exactly my results.
IMHO, that’s bad coding practice. In our R packages (PowerTOST, Power2Stage) we have an option to work either with a fixed or a random seed. With the former (which is the default) it is possible to reproduce a given run. Detlew’s randomizeBE is even more flexible. You can work with a random seed and the output gives the seed used. Then you can use this seed as an argument to reproduce the run. Clever.
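Sketched in R (a hypothetical toy statistic, not what our packages actually do), the pattern could look like this – accept a user-supplied seed or draw a random one, but always report the seed used:

# Minimal sketch: reproducible bootstrap with a reported seed.
boot.ci <- function(x, B = 5000, seed = NULL) {
  if (is.null(seed))                           # no seed given:
    seed <- sample.int(.Machine$integer.max, 1) # draw a random one
  set.seed(seed)                               # fix the RNG state
  stat <- replicate(B, mean(sample(x, replace = TRUE))) # toy statistic
  list(seed = seed,                            # report the seed used
       CI   = quantile(stat, probs = c(0.05, 0.95)))
}
x  <- c(51.67, 52.39, 51.49, 52.18, 50.97, 52.13)
r1 <- boot.ci(x)                  # random seed, but reported in the output
r2 <- boot.ci(x, seed = r1$seed)  # … so the run can be repeated exactly
identical(r1$CI, r2$CI)           # TRUE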

OK, how reproducible are runs of PhEq_bootstrap?

run      5,000 Bootstraps    25,000 Bootstraps
         ƒ2*     90% CI       ƒ2*     90% CI
 1     59.793 52.77, 67.66  59.839 52.81, 67.72
 2     59.786 52.75, 67.70  59.840 52.81, 67.72
 3     59.864 52.82, 67.71  59.839 52.81, 67.72
 4     59.829 52.77, 67.75  59.840 52.81, 67.72
 5     59.840 52.81, 67.70  59.839 52.81, 67.72
 6     59.850 52.82, 67.73  59.843 52.81, 67.72
 7     59.823 52.77, 67.71  59.842 52.81, 67.72
 8     59.843 52.84, 67.70  59.839 52.81, 67.72
 9     59.859 52.77, 67.75  59.836 52.81, 67.72
10     59.803 52.75, 67.70  59.841 52.81, 67.72
11     59.864 52.82, 67.75  59.836 52.81, 67.72
12     59.805 52.75, 67.71  59.840 52.81, 67.72
13     59.860 52.84, 67.72  59.836 52.81, 67.72
14     59.843 52.77, 67.73  59.839 52.81, 67.72
15     59.808 52.77, 67.66  59.840 52.81, 67.72
min    59.786 52.75, 67.66  59.836 52.81, 67.72
Q I    59.806 52.77, 67.70  59.839 52.81, 67.72
med    59.840 52.77, 67.71  59.839 52.81, 67.72
Q III  59.855 52.82, 67.73  59.840 52.81, 67.72
max    59.864 52.84, 67.75  59.843 52.81, 67.72
CV (%) 0.0449 0.063  0.041  0.0034 0.000  0.000

Not so nice. With 5,000 bootstraps the range of the lower CL across runs is 0.09.
Imagine that you claimed similarity (ƒ2* ≥ 50) and an assessor asks you to demonstrate how you obtained it. You fire up the software with its default of 5,000 bootstraps and are slapped in the face with:

f2* = 49.93  Similarity NOT confirmed: Lower CI is below the limit (f2 = 50)

Oops!
Bootstrapping converges to the true value with increasing repetitions. If you want reproducible results you have to use a workaround and go for 25,000+ bootstraps. Then the results are stable, but you have to accept ~80 MB result files…
Of course, this is only valid for these data sets (12 units, 4 sampling time points). Good luck with more time points.
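If you want to see the effect yourself, here is a toy simulation in R (made-up profiles, not the Shah data sets; the ƒ2 formula follows Shah et al.):

# How the run-to-run spread of the bootstrapped lower CL shrinks as B grows.
f2 <- function(test, ref) {            # units x time-points matrices
  msd <- mean((colMeans(ref) - colMeans(test))^2)
  50 * log10(100 / sqrt(1 + msd))      # similarity factor (Shah et al.)
}
lower.CL <- function(test, ref, B) {   # 5th percentile of bootstrapped f2
  n <- nrow(test)                      # resample units of each product
  f2.star <- replicate(B, f2(test[sample(n, replace = TRUE), ],
                             ref [sample(n, replace = TRUE), ]))
  unname(quantile(f2.star, 0.05))
}
set.seed(42)                           # only to make this demo reproducible
test <- matrix(rnorm(12 * 4, c(20, 45, 70, 85), 3), 12, 4, byrow = TRUE)
ref  <- matrix(rnorm(12 * 4, c(22, 48, 72, 86), 3), 12, 4, byrow = TRUE)
for (B in c(500, 5000, 25000))         # 15 independent runs per B
  cat("B =", format(B, width = 5), " range of lower CL:",
      signif(diff(range(replicate(15, lower.CL(test, ref, B)))), 2), "\n")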


  • Shah VP, Tsong Y, Sathe P, Liu J-P. In Vitro Dissolution Profile Comparison—Statistics and Analysis of the Similarity Factor, f2. Pharm Res. 1998;15(6):889–96. doi:10.1023/A:1011976615750.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
ElMaestro ★★★
Denmark, 2018-07-13 00:21
@ Helmut
Posting: # 19040

Validation of PhEq_bootstrap

Hi Hötzi,

❝ Bootstrapping converges to the true value with increasing repetitions. If you want to get reproducible results you have to use a workaround and go for 25,000+ bootstraps. Then the results are stable but you have to accept ~80MB result-files…

❝ Of course this is only valid for these data sets (12 units, 4 sampling time points). Good luck with more time points.


The magnitude of the variability of the estimate is dependent on the number of bootstraps and the within- and between-variability. This is just the nature of it.

What I don't quite get is how the need for 80 MB files (files, really???) arises. For 25,000 bootstraps you should only need to allocate 25,000 × a double datatype (8 bytes) for the storage of f2*, which isn't even a megabyte from the heap. But note that I did not care to read any of the Pascal source code. However, speed or memory consumption is a mundane issue. If the calculation is proper then there is no need to whine at all.

When I do this in C, I allocate 8 MB from the heap. It takes a blink of an eye or two with a million bootstraps and sorting, with no intermediate files anywhere, and it provides the bootstrap CI in various flavours: raw, bias-corrected, and bias-corrected and accelerated.
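The arithmetic is easy to check in R:

# Back-of-the-envelope check: B doubles at 8 bytes each.
format(object.size(numeric(25e3)), units = "KB") # ~195 KB for 25,000 f2* values
format(object.size(numeric(1e6)),  units = "MB") # ~7.6 MB for a million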

Pass or fail!
ElMaestro
Helmut ★★★
Vienna, Austria, 2018-07-13 03:29
@ ElMaestro
Posting: # 19041

Validation of PhEq_bootstrap

Hi ElMaestro,

❝ The magnitude of the variability of the estimate is dependent on the number of bootstraps and the within- and between-variability. This is just the nature of it.


Yessir, rightyright!

❝ What I don't quite get is how the need for 80 Mb files (files, really???) arises.


Dokumentation, darling! The file contains not only the four estimates (ƒ1, ƒ2, unbiased ƒ2, ƒ2*) in full precision for the X bootstraps but also the raw data (+average, variance) of all samples. That’s a fucking lot.

❝ For 25000 bootstraps you should only need to allocate e.g. 25000x a double datatype (8 bytes) for the storage of f2*, which isn't even a megabyte from the heap.


Sure. The memory consumption is low, regardless of which software you use.

❝ But note I did not care to read any of the Pascal source code.


No need for that. Download the stuff and give it a try.

❝ However, speed or memory consumption is a mundane issue. If the calculation is proper then no need to whine at all.


ACK. The code is reasonably fast. The documentation states “Please bear in mind that the algorithm is very demanding in terms of CPU time, therefore observe progress bar at the bottom and be patient…” On my machine 25,000 bootstraps took ~3 seconds.

❝ When I do this in C, I allocate 8 Mbs from the heap, and it takes a blink of an eye or two with a million bootstraps and sorting, …


Oh dear! On my machine one million bootstraps took about an hour, with ~90% of the time spent on sorting.
My Pascal is rusty, but lines 162–181 of the source smell of Bubble Sort. I would rather opt for Quicksort or a parallelized variant.
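To illustrate why the sorting step explodes (my own sketch, not the package’s code): a naive bubble sort is O(n²), whereas the built-in sort is roughly O(n log n).

bubble <- function(x) {              # naive O(n^2) bubble sort
  n <- length(x)
  repeat {
    swapped <- FALSE
    for (i in seq_len(n - 1)) {
      if (x[i] > x[i + 1]) {         # swap neighbours which are out of order
        x[c(i, i + 1)] <- x[c(i + 1, i)]
        swapped <- TRUE
      }
    }
    if (!swapped) return(x)          # no swaps in a full pass: done
  }
}
set.seed(1)
x <- runif(2e3)                      # 2,000 values already take seconds in R …
system.time(stopifnot(!is.unsorted(bubble(x)))) # quadratic: 250,000× more work at 1e6
system.time(sort(runif(1e6)))        # … while sort() handles a million instantly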

❝ … and no intermediary files anywhere, …


Making validation interesting. One million bootstraps produced a nice 3 GB result file and a 63 MB log file. Print the ~18 million lines of the result file and add an impressive amount of paper to the dossier.

❝ … and it provides derivation of the bootstrap CI in various flavours: Raw, bias corrected, and bias corrected and accelerated.


Congratulations!

ElMaestro ★★★
Denmark, 2018-07-13 12:14
@ Helmut
Posting: # 19043

Validation of PhEq_bootstrap

Hi Hötzi,

❝ Dokumentation, darling! The file contains not only the four estimates (ƒ1, ƒ2, unbiased ƒ2, ƒ2*) in full precision for the X bootstraps but also the raw data (+average, variance) of all samples. That’s a fucking lot.


Yes, that is a lot. Perhaps a lot of time is spent writing the datasets to file streams and flushing file buffers.

Still, let us say 6 time points, 12 chambers, 2 formulations: ~144 MB, plus the array of f2*. Allocating that sort of memory should never be an issue on newer hardware. The browser you have opened right now to read this post consumes a lot more, probably by a factor of 4–10 :-D

What is the use of having all that info cached??

Under bootstrapping the i-th dataset is as valid as the j-th, where (i, j) ∈ [0; B] × [0; B]. When I write resampling code I usually just extract a few datasets and print e.g. the 111th sample or whatever and check that it has the right set of elements (i.e., that the 111th dataset for Test only has Test values at the ‘right’ time points and so forth). And I can check that the derived f2* is one I can reproduce by manual calculation in another application. It isn’t such a lot of work. But if I had to do that for a million datasets, I’d be very, very late for dinner.

What I find a little more tricky is to convince myself that the derivation of CIs from the vector of f2* values is valid. A key element here is the sorting. What I did once was to output the first 65,536 rows of the sorted array and import them into a spreadsheet (if I recall correctly, 65,536 is the maximum number of rows in my spreadsheet). Then copy those values to another column, sort this column, and check whether row differences exist. A row difference would indicate a sorting mishap in one of the applications. Never observed one, fortunately.
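In R the two checks could look like this (a sketch with simulated data and hypothetical names, not PhEq_bootstrap’s own code):

set.seed(42)
test <- matrix(runif(12 * 4, 40, 90), nrow = 12)  # 12 units x 4 time points
B    <- 1000
idx  <- replicate(B, sample(nrow(test), replace = TRUE)) # resampled unit indices
# (1) spot-check: the 111th bootstrap dataset may only contain values
#     that occur in the original Test data, time point by time point
s111 <- test[idx[, 111], ]
ok   <- vapply(seq_len(ncol(test)),
               function(j) all(s111[, j] %in% test[, j]), logical(1))
stopifnot(all(ok))
# (2) sorting cross-check: compare sort() against an independent route
#     (ordering by rank); any difference would flag a sorting mishap
f2.star <- runif(B, 45, 65)          # stand-in for bootstrapped f2* values
stopifnot(identical(sort(f2.star), f2.star[order(f2.star)]))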

Helmut ★★★
Vienna, Austria, 2018-07-13 14:54
@ ElMaestro
Posting: # 19045

Validation of PhEq_bootstrap

Hi ElMaestro,

❝ ❝ Dokumentation, darling! […]

❝ Yes, that is a lot. Perhaps a lot of time is spent writing the datasets to file streams and flushing file buffers.


This was my first guess as well. I assumed that the author used an HDD and that the 25,000 on my machine were so fast because I have SSDs. Then I inspected the source and observed what happened with the one million bootstraps. The file is allocated with the computed size, then bootstrapping is performed, the array is sorted, and only at the end is everything written to the file. No disk I/O before that – everything happens in RAM.

❝ […] ~144 Mb, plus the array of f2*. Allocating that sort of memory should not be an issue ever on newer hardware.


Sure.

❝ What is the use of having all that info cached??


Because at the end it has to be written to the file. It’s not my code, but I would include options to specify what – if anything at all – has to be documented.

❝ Under bootstrapping the i'th dataset is as valid as the j'th dataset where (i,j) is in ([0;B], [0;B]). When I write resampling code I usually just extract a few dataset and print e.g. the 111'th sample or whatever and check that it has the right set of elements (i.e. that the 111'th dataset (or whatver) for Test only has Test values at the 'right' time points and so forth).


Yep.

❝ What I find little more tricky is to convince myself that the derivation of CI's from the vector of f2* values is valid. A key element here is the sorting.


Not only that. Getting the CI from the array index needs interpolation. More on that further down.

❝ What I did once was to output the sorted first 65536 rows of the sorted array and import them in a spreadsheet (if I recall correctly 65536 is the maximum number of rows in my spreadsheet). Then copy those values to another column, sort this column, and check if row difference exist. A row difference would indicate a sorting mishap in one of the applications.


Checking sorting? Kudos!
The tricky part is the CI. Try it with a very small data set and use the spreadsheet’s PERCENTILE() function. When I looked at the source of PhEq_bootstrap I was suspicious whether it would give the same result.
Example in R:

x      <- c(51.6730571183459, 52.3852052174518, 51.4902456409739,
            52.1787527612649, 50.9739796238878, 52.1273407615101,
            52.6282854486968, 51.5029428204135, 51.5056701956140,
            52.5994758210502, 52.3035556756045, 51.6613007946535)
n      <- length(x)
mean.x <- mean(x)
var.x  <- var(x)
sd.x   <- sqrt(var.x)
alpha  <- 0.05 # for two-sided 90% CI
CI.col <- c(paste0(100*alpha, "%"), paste0(100*(1-alpha), "%"))
err.z  <- qnorm(1-alpha)*sd.x/sqrt(n)      # based on normal distribution
err.t  <- qt(1-alpha, df=n-1)*sd.x/sqrt(n) # based on t-distribution
CI.z   <- mean.x + c(-1, +1)*err.z
CI.t   <- mean.x + c(-1, +1)*err.t
names(CI.t) <- names(CI.z) <- CI.col
CI.rep <- c(lower=50.97, upper=52.60) # reported 90% CI
result <- rbind(CI.z = c(CI.z, FALSE, FALSE), # one row per CI type
                CI.t = c(CI.t, FALSE, FALSE))
colnames(result) <- c(CI.col, "lo", "hi")
for (type in 1:9) {
  CI.boot.type. <- c(quantile(x, probs=c(alpha, 1-alpha), type=type),
                     rep(FALSE, 2))
  result <- rbind(result, CI.boot.type.)
}
row.names(result)[3:11] <- paste0(row.names(result)[3:11], 1:9)
for (j in 1:nrow(result)) {
  if (round(as.numeric(result[j,  "5%"]), 2) == CI.rep[1])
    result[j, "lo"] <- TRUE
  if (round(as.numeric(result[j, "95%"]), 2) == CI.rep[2])
    result[j, "hi"] <- TRUE
}
print(as.data.frame(result)) # 1=match, 0=no match

                     5%      95% lo hi
CI.z           51.67159 52.16671  0  0
CI.t           51.64886 52.18944  0  0
CI.boot.type.1 50.97398 52.62829  1  0
CI.boot.type.2 50.97398 52.62829  1  0
CI.boot.type.3 50.97398 52.59948  1  1
CI.boot.type.4 50.97398 52.61100  1  0
CI.boot.type.5 51.02561 52.62540  0  0
CI.boot.type.6 50.97398 52.62829  1  0
CI.boot.type.7 51.25793 52.61244  0  0
CI.boot.type.8 50.97398 52.62829  1  0
CI.boot.type.9 50.97398 52.62829  1  0

R and SAS provide various types of interpolation. PhEq_bootstrap uses “Type 3” (in R lingo) which, AFAIK, is also the default in SAS (PCTLDEF=5). All spreadsheets I have tested (Excel, OO Calc, Gnumeric) use “Type 7”. Of course, with even moderately large samples the differences practically disappear. But checking sumfink like this is the fun of software validation, right?

ElMaestro ★★★
Denmark, 2018-07-13 16:28
@ Helmut
Posting: # 19048

Validation of PhEq_bootstrap

Hi again,

re the index: A simple solution is not to sample 1,000,000 times but 999,999 – then (B + 1) × 0.05 is an integer, so the 5th percentile is an exact order statistic and no rounding of the index is involved. Another solution is to sample so many times that the result is the same regardless of whether you round up or down when finding the index of the 5% percentile etc. This, however, takes a bit of time to figure out.
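In R the idea can be demonstrated like this (simulated stand-in values; type 6 is one of the (n + 1)p quantile definitions):

set.seed(42)
f2.star <- rnorm(999999, mean = 60, sd = 3)   # stand-in for bootstrapped f2*
# with B = 999,999: (B + 1) * 0.05 = 50,000 is an integer, so the 5th
# percentile is exactly the 50,000th ordered value -- no interpolation
q5 <- quantile(f2.star, 0.05, type = 6, names = FALSE)
stopifnot(isTRUE(all.equal(q5, sort(f2.star)[50000])))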

Shuanghe ★★
Spain, 2018-07-17 17:54
@ Helmut
Posting: # 19077

Validation of PhEq_bootstrap

Hi Helmut!

❝ Now for the bad news. There is one stupid thing in the software when it comes to validation (line 535 of the source Conv_to_learn1.pas):

RandSeed:=random(random(1000000))+random(random(1000000));

That means that the seed of the (pseudo)random number generator is always a random number itself. Therefore, it is impossible to reproduce runs. Try it: you will not get exactly my results.


What a coincidence! I was in a meeting recently with a regulatory agency and the issue of reproducibility came up, since the agency couldn’t reproduce the bootstrap f2 values we reported in the dossier. My first thought was the different seed number. I opened the source files one by one and searched for words like random and seed to find the line. :-D

❝ IMHO, that’s bad coding practice.


Indeed. The end user should be able to specify the seed in order to reproduce the analysis. I always leave a variable for the seed in my SAS code for bootstrapping BE data.

All the best,
Shuanghe
d_labes ★★★
Berlin, Germany, 2018-08-15 11:45
@ Helmut
Posting: # 19168

PhEq_bootstrap in R

Dear Helmut, dear All,

Just found: bootf2BCA, an R equivalent of PhEq_bootstrap. It contains scripts for the R console and a GUI based on the shiny package.

It may be better suited for validation, since the source code can be modified to fit your needs without knowledge of the rarely used Object Pascal – e.g., to introduce a fixed seed assuring the same results between runs.

Be warned: the source code is hard to read, with nearly no comments at all.

Be further warned: if you raise the number of bootstrap samples (the default of 1,000 is too low IMHO), the result file will contain a huge number of entries – every bootstrap sample is recorded! The same mess as in the Object Pascal implementation. But of course that could be changed with a little knowledge of R.

Regards,

Detlew
Helmut ★★★
Vienna, Austria, 2019-10-05 02:59
@ d_labes
Posting: # 20665

bootf2BCA v1.1

Dear all,

we discussed the code at BioBridges…

The problem with reproducibility can easily be solved.
In the script F2_boot.R after line 270 #main algorithm add a new line

set.seed(X)

where X is an integer (specified in the protocol and report). Use

c(-1L, +1L) * .Machine$integer.max

to check the possible range. On my machine it gives

[1] -2147483647  2147483647

However, I suggest a simple one (42?).

❝ Be warned: The source code is hard to read, nearly no comments at all.


The current version 1.2 is a little bit better. Still heavy stuff.

The cut-off preferred by the EMA is implemented:

[screenshot: cut-off option in the GUI]

Don’t get confused by Q>=85% in the screenshot. That’s a typo; actually the correct >85% is implemented.
Still missing: one value >85% for the test (FDA) or one value >85% for the comparator (WHO).

❝ Be further warned: if you raise the number of bootstrap samples (the default of 1,000 is too low IMHO), the result file will contain a huge number of entries – every bootstrap sample is recorded! The same mess as in the Object Pascal implementation. But of course that could be changed with a little knowledge of R.


Hint: In F2_boot.R comment out the loop in lines 317–329. Then the report file shrinks from a couple of megabytes to some kilobytes. I would keep the rest for documentation. It is useful to read the bootstrapped ƒ2 values from the file and generate your own histogram (density instead of counts). I don’t like the one of plotly. Way too many bins for my taste (here an example with 5,000 bootstraps). \(\small{\textrm{ceiling}(\sqrt n)}\) gives 71 bins and smells of Excel.

[histogram: plotly’s default binning]

I prefer the Freedman–Diaconis rule* and additionally show the median and 90% CI (green lines if the lower CL ≥50).

[histogram: Freedman–Diaconis binning with median and 90% CI]



  • Even if you specify hist(..., breaks="Freedman-Diaconis") this is not what you might expect from the Freedman–Diaconis rule \(width=2\frac{\textrm{IQR}(x)}{\sqrt[3]{n}}\) and \(bins=[\textrm{max}(x)-\textrm{min}(x)]/width\).
    The man page of hist states:
      […] the number is a suggestion only; as the breakpoints will be set to pretty values.
    In my example I got 34 bins instead of the desired 47 (IQR=2.9026, n=5,000, min=55.76, max=71.73). Workaround (x are the bootstrapped ƒ2-values):
      width <- 2*IQR(x)/length(x)^(1/3)
      bins  <- as.integer(diff(range(x)) / width)
      hist(x, breaks = seq(min(x), max(x), length.out = bins + 1), # bins + 1 breakpoints give bins cells
           freq = FALSE)
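Putting the pieces together (a sketch with simulated stand-in values; reading the actual ƒ2* values from the result file depends on the bootf2BCA version and is not shown):

set.seed(42)
x     <- rnorm(5000, mean = 63, sd = 2.5)    # stand-in for 5,000 f2* values
width <- 2 * IQR(x) / length(x)^(1/3)        # Freedman-Diaconis bin width
bins  <- as.integer(diff(range(x)) / width)
hist(x, breaks = seq(min(x), max(x), length.out = bins + 1), freq = FALSE,
     col = "lightgrey", border = "white", main = "",
     xlab = expression(italic(f)[2]^"*"))
q <- quantile(x, probs = c(0.05, 0.5, 0.95), type = 3)  # median and 90% CI
abline(v = q[2], lwd = 2)                               # median
abline(v = q[c(1, 3)], lty = 2,
       col = ifelse(q[1] >= 50, "darkgreen", "red"))    # CI limits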

Helmut ★★★
Vienna, Austria, 2020-09-25 21:37
@ Helmut
Posting: # 21940

bootf2BCA v1.2: seed

Dear all,

Alfredo García-Arieta (Spanish agency AEMPS) notified me that in v1.2 there is no longer any need for the workaround to get a fixed seed. Don’t use the   Download   button in the top folder at SourceForge – that’s still v1.1 of October 2018. Instead:
  • navigate to Files,
  • open the folder of v1.2, and
  • download bundle_bootf2BCA_v1.2.zip of May 2019.
Then you can specify a seed (a fixed one of 100 is the default):

[screenshot: seed setting in the GUI]


At the end of report.txt the seed is given for reproducibility.

PTM139 ☆
India, 2021-04-16 09:53
@ Helmut
Posting: # 22309

bootf2BCA v1.2: seed

Dear Helmut,

Although I downloaded v1.2, it is not working for me. The GUI does not open as it does with PhEq_bootstrap. Are there any specific steps to follow?

This is what I see when I open the GUI file:
#!/bin/sh
R CMD BATCH --slave ./run_shiny.R   &
exit 0


Regards,
Jayant


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5[Helmut]
Helmut ★★★
Vienna, Austria, 2021-04-16 12:50
@ PTM139
Posting: # 22310

Which packages, working directory?

Hi Jayant,

❝ […] not working …


… is not a proper description of a software-related problem. I don’t have a crystal ball. Operating system, release of R, installed packages and their versions, working directory?

Install devtools from CRAN and run this script:

getwd()
options(width = 60)
devtools::session_info()

You need five packages to run bootf2BCA: boot, ggplot2, plotly, rlang, and shiny. Their current versions are: boot 1.3-27, ggplot2 3.3.3, plotly 4.9.3, rlang 0.4.10, shiny 1.6.0. If one or more are missing or outdated, see this article about installation/updating of packages.
Alternatively you can execute this script:

update.packages(checkBuilt = TRUE, ask = FALSE)
install.packages(c("boot", "ggplot2", "plotly", "rlang", "shiny"),
                 repos = "https://cloud.r-project.org")

If after the first command a window with the question

 Do you want to install from sources the packages which need
 compilation?

pops up, click on     No     (the central button – its actual label depends on the locale of your operating system).

Your working directory (as reported by getwd()) has to be the one where the scripts of bootf2BCA are (the data files can be somewhere else). If not, in the menu bar of the R console:

File → Change dir… {navigate to the desired folder} and confirm with   OK  


PTM139 ☆
India, 2021-04-27 15:02
@ Helmut
Posting: # 22330

Which packages, working directory?

Thank you for your prompt reply. I will follow the instructions and try to install all the packages.

Regards,
Jayant
Rayhope ☆
India, 2022-03-28 16:35
@ Helmut
Posting: # 22881

bootf2BCA v1.2: seed

❝ Alfredo García-Arieta (Spanish agency AEMPS) notified me that in v1.2 there is no longer any need for the workaround to get a fixed seed.


Currently v1.3 is available, and I guess they fixed a lot of issues in the current version.
@Helmut: I am using v1.3, but the source code was not found in the software bundle. Any chance to get the source code if it is available with you?
I have verified the results of PhEq_bootstrap v1.2 vs. bootf2BCA v1.3 with the examples of the article (Noce L, Gwaza L, Mangas-Sanjuan V, García-Arieta A. Comparison of free software platforms for the calculation of the 90% confidence interval of f2 similarity factor by bootstrap analysis. Eur J Pharm Sci.) and found similar results – only very minor differences, which disappeared after rounding.
Helmut ★★★
Vienna, Austria, 2022-03-28 16:48
@ Rayhope
Posting: # 22882

bootf2BCA v1.3

Hi Rayhope,

❝ Currently v1.3 is available, and I guess they fixed a lot of issues in the current version.


Unfortunately it does not contain a NEWS file (which would be common on CRAN and GitHub). If you are interested, you can only compare the source with that of v1.2.

❝ @Helmut: I am using v1.3, but the source code was not found in the software bundle. Any chance to get the source code if it is available with you?


Use this link to download the zip-file.

Rayhope ☆
India, 2022-03-30 08:34
@ Helmut
Posting: # 22885

bootf2BCA v1.3

❝ Use this link to download the zip-file.


Thanks Helmut for the link.
Flam ☆
New Zealand, 2023-06-21 05:41
@ Helmut
Posting: # 23609

bootf2BCA v1.1

❝ Hint: In F2_boot.R comment out the loop in lines 317–329. Then the report file shrinks from a couple of megabytes to some kilobytes. I would keep the rest for documentation. It is useful to read the bootstrapped ƒ2 values from the file and generate your own histogram (density instead of counts). I don’t like the one of plotly. Way too many bins for my taste (here an example with 5,000 bootstraps). \(\small{\textrm{ceiling}(\sqrt n)}\) gives 71 bins and smells of Excel.


I am a first-time R user trying to use bootf2BCA v1.3. How can I access the bootstrapped f2 values to draw the histogram etc.?
Helmut ★★★
Vienna, Austria, 2023-06-23 15:47
@ Flam
Posting: # 23613

bootf2BCA v1.3.1 (current since 2022-06-25)

Hi Flam,

❝ I am a first time R user trying to use bootf2BCA v1.3, how can I access the bootstrapped f2 values to draw the histogram etc.?


Before I start to adapt my old workaround to bootf2BCA_v1.3.1, did you succeed in specifying the result file in shiny?

Flam ☆
New Zealand, 2023-06-27 01:16
@ Helmut
Posting: # 23615

bootf2BCA v1.3.1 (current since 2022-06-25)

❝ Before I start to adapt my old workaround to bootf2BCA_v1.3.1, did you succeed in specifying the result file in shiny?


No, I have not been able to install shiny.


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5[Helmut]
Helmut ★★★
Vienna, Austria, 2023-06-27 13:47
@ Flam
Posting: # 23620

Install required packages first

Hi Flam,

❝ No, I have not been able to install shiny.


That’s not a proper description of the problem. Operating system, version of R, …?
For the future, see Eric Steven Raymond’s essay “How To Ask Questions The Smart Way”. ;-)

Execute this script in the R console (not in a GUI like RStudio):

update.packages(checkBuilt = TRUE, ask = FALSE)
install.packages(c("boot", "ggplot2", "plotly", "rlang", "shiny"),
                 repos = "https://cloud.r-project.org")

If after the first command a window with the question

 Do you want to install from sources the packages which need
 compilation?

pops up, click on   No  .

Rayhope ☆
India, 2021-07-29 10:49
@ Helmut
Posting: # 22485

Validation of PhEq_bootstrap

❝ I was asked whether it is possible to validate PhEq_bootstrap (see this post). Sure. Since the source-code is available and if one is familiar with Object Pascal even a “white-box” validation is doable. :-D


Hi Helmut,
We received a similar query from an agency about validation of the PhEq_bootstrap results. Can you help me out with simple steps to do that? I don’t know about programming or validation of software. I read all conversations related to your post and came to the conclusion that validation of PhEq_bootstrap and bootf2BCA is not at all my cup of tea. But I need to reply to the agency’s query on how to proceed. Please guide me.
Also let me know whether you can provide SAS code for the bootstrap mentioned in the article by Shah et al. (In Vitro Dissolution Profile Comparison – Statistics and Analysis of the Similarity Factor, f2. Pharm Res. 1998;15(6):889–96).
Disclaimer: I don’t have any basic knowledge of coding in any language.
Helmut ★★★
Vienna, Austria, 2021-08-02 20:12
@ Rayhope
Posting: # 22499

Validation of PhEq_bootstrap

Hi Rayhope,

❝ We received a similar query from an agency about validation of the PhEq_bootstrap results.


May I ask: Which one?

❝ Can you help me out with simple steps to do that? I don’t know about programming or validation of software. I read all conversations related to your post and came to the conclusion that validation of PhEq_bootstrap and bootf2BCA is not at all my cup of tea.


OK, understandable. You have to hire a [ contract killer ] consultant knowledgeable with R and – preferably – with a solid background in ƒ2 comparisons and bootstrapping.

❝ But need to reply on Agency query how to proceed.


Maybe it is sufficient to point to this* goody? Luther Gwaza works for the WHO and Alfredo García-Arieta for the Spanish agency AEMPS. He is also a member of the CHMP’s Pharmacokinetics Working Party. A quote from the paper’s Discussion:

[…] the results demonstrate the adequacy of PhEq Bootstrap and Bootf2bca software to replicate the observed results (Shah et al., 1998). As expected, the 90% CI were more similar among the software considered when the number of bootstrap replicates increased, although the observed differences could be explained by the seed number considered by each computer and the computer processor. The external validation shows that similarity conclusion for each comparison was irrespective of the software considered.


As shown in this post, the current version of bootf2BCA uses a fixed seed of 100 by default and documents it in the output. Hence, results are reproducible – independent of the machine and the operating system.

❝ Also let me know if you can provide SAS code for bootstrap […]


Sorry, I’m not equipped with SAS.


  • Noce L, Gwaza L, Mangas-Sanjuan V, García-Arieta A. Comparison of free software platforms for the calculation of the 90% confidence interval of f2 similarity factor by bootstrap analysis. Eur J Pharm Sci. 2020;146:105259. doi:10.1016/j.ejps.2020.105259.
