Ohlbe
★★★

France,
2021-09-16 23:41
(376 d 09:44 ago)

Posting: # 22575
Views: 4,073
 

 Another two... [GxP / QC / QA]

Dear all,

It looks like the FDA has just keelhauled 2 CROs again: Panexcell and Synchron Research. The FDA will reject all data from both CROs due to data manipulation. All Panexcell data had already been rejected in the EU.

The FDA letters are very explicit regarding what they have found, and their conclusion is crystal clear:

Put simply, because you have been responsible for the creation of false data in the studies discussed here, we have no reason to believe that any data that you have produced are reliable. Thus, FDA has determined that all study data from all studies conducted at your firm must be rejected.

Regards
Ohlbe
ElMaestro
★★★

Denmark,
2021-09-17 08:42
(376 d 00:43 ago)

@ Ohlbe
Posting: # 22576
Views: 3,688
 

 Another two...

Yikes Ohlbe,

» It looks like the FDA has just keelhauled 2 CROs again: Panexcell and Synchron Research. The FDA will reject all data from both CROs due to data manipulation. All Panexcell data had already been rejected in the EU.
»
» The FDA letters are very explicit regarding what they have found, and their conclusion is crystal clear:
»
» Put simply, because you have been responsible for the creation of false data in the studies discussed here, we have no reason to believe that any data that you have produced are reliable. Thus, FDA has determined that all study data from all studies conducted at your firm must be rejected.

thanks for posting the info. These stories justify the existence of the SatoWIB and Buster software tools, which are designed to detect exactly the type of manipulation the FDA is worried about. The FDA sends a remarkably clear message there.

One good question is what happens now. I would not be surprised if Panexcell and Synchron cease to exist (might they pop up under another name?), if sponsors using the two CROs are forced to abandon some or all of their MAs and/or repeat studies even for the EU and WHO, and if regulators start to consider more seriously how statistical (or pharmacokinetic) screening software for indicators of manipulation should be used routinely within agencies. There may be little turf wars between assessors and inspectors all the while, e.g. whose job is it to screen, and who takes decisions on the basis of the outcome? But Rome wasn't built in a day :-D :-D

Bonus question:

Have a look at the pdf files from FDA. Now answer this question: Which day were the letters issued?

Not written in the file? Well, then how about the meta-info embedded in the PDF file? Oh, a date is not there either? Well, then... Do we really need to go looking for the date on the linking FDA page?
It is funny how, in this day and age when ALCOA and so forth is so fashionable, letters like these are not explicitly dated. Is there a reason?
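For the curious, the embedded metadata can be checked programmatically; a minimal sketch with the pdftools package (the file name is hypothetical — use whatever you saved the letter as):

```r
# Sketch: dump a PDF's embedded metadata (file name hypothetical)
library(pdftools)
info <- pdf_info("fda_letter.pdf")
info$created   # creation timestamp, if present
info$modified  # modification timestamp, if present
info$keys      # Title, Author, Creator, Producer, ...
```

If `created`/`modified` are empty and `keys` holds no date either, the file really is undated, metadata included.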


Edit: 2 successive posts merged [Ohlbe]

Pass or fail!
ElMaestro
Ohlbe
★★★

France,
2021-09-17 10:15
(375 d 23:09 ago)

@ ElMaestro
Posting: # 22579
Views: 3,503
 

 Dates

Dear ElMaestro,

» Have a look at the pdf files from FDA. Now answer this question: Which day were the letters issued?

Well, the only place where you can find a date is the name of the PDF file, which for both letters includes "9.15.21". If you were to tell an FDA inspector this is how you date your documents, you would get his autograph on a '483.

By the way, I forgot. Look at the date of the inspection on Synchron's letter. Now look at this announcement on their web site :-D

Regards
Ohlbe
Ohlbe
★★★

France,
2021-09-17 20:28
(375 d 12:57 ago)

@ ElMaestro
Posting: # 22587
Views: 3,435
 

 Some more info

Hi ElMaestro,

» One good question is what happens now. I would not be surprised if Panexcell and Synchron cease to exist (might they pop up under another name?), if sponsors using the two CROs are forced to abandon some or all of their MAs and/or repeat studies

Yes, indeed, in the USA: instructions to companies have now been posted on the FDA web site. Repeat all studies; the therapeutic equivalence rating of already approved products has meanwhile been changed to BX. Europe: it should not take them more than a year or two to decide what to do :sleeping:

» Have a look at the pdf files from FDA. Now answer this question: Which day were the letters issued?

» Do we really need to go looking for the date on the linking FDA page?

According to this new page: the letters were sent out on 15 September 2021.

Regards
Ohlbe
Ohlbe
★★★

France,
2022-01-24 17:47
(246 d 14:37 ago)

@ Ohlbe
Posting: # 22754
Views: 2,015
 

 EU Article 31 referral started

Dear all,

» Europe: it should not take them more than a year or two to decide what to do :sleeping:

Mmmm. It looks like things are moving a bit: according to the Agenda of the January CHMP meeting an Article 31 referral is being triggered (see page 32 of the PDF). Will take some more months before the referral is concluded and the Commission decision gets published and implemented.

Malaysia announced their decision almost 2 months ago.


Edit: detailed information on the referral is now available here [Ohlbe].

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2021-09-17 16:50
(375 d 16:35 ago)

@ Ohlbe
Posting: # 22583
Views: 3,618
 

 Cheating

Hi Ohlbe,

successful cheating is not for lay persons… R script at the end.
The CVs calculated from the confidence intervals in both ‘parts’ are much lower than the ones in the ‘full’ study. Or the other way ’round: If we pool the CVs of the ‘parts’ we could expect values which are lower than the ‘observed’ (tee-hee!) ones.

       CRO  study  n metric     PE  lower  upper   BE    CV CV.pooled
 Panexcell part 1 12   Cmax  71.26  64.78  78.38 fail 12.93
 Panexcell part 2 12   Cmax 141.80 124.27 161.82 fail 17.98
 Panexcell   full 24   Cmax 100.52  86.88 116.31 pass 30.08     15.25
 Panexcell part 1 12 AUC0-t  85.41  78.45  93.00 fail 11.53
 Panexcell part 2 12 AUC0-t 126.89 115.81 139.04 fail 12.40
 Panexcell   full 24 AUC0-t 104.11  94.97 114.12 pass 18.69     11.96
  Synchron part 1 41   Cmax 135.55 115.53 159.04 fail 44.99
  Synchron part 2 31   Cmax  73.54  60.53  89.36 fail 47.50
  Synchron   full 72   Cmax 103.50  90.30 118.62 pass 52.21     46.04
  Synchron part 1 41 AUC0-t 122.98 110.07 137.41 fail 30.47
  Synchron part 2 31 AUC0-t  75.24  63.33  89.38 fail 41.54
  Synchron   full 72 AUC0-t  99.01  88.86 110.32 pass 40.46     34.78



# https://www.fda.gov/media/151569/download (page 4)
# https://www.fda.gov/media/151570/download (page 4)
library(PowerTOST)
CRO     <- c("Panexcell", "Synchron")
designs <- c("2x2x2", "2x2x2")
metric  <- c("Cmax", "AUC0-t")
# arbitrary identifiers, only the last one must be "full"
study   <- c("part 1", "part 2", "full")
ns1     <- c(12, 12, 24)
ns2     <- c(41, 31, 72)
PE1     <- c( 71.26, 141.80, 100.52,
              85.41, 126.89, 104.11)
PE2     <- c(135.55,  73.54, 103.5 ,
             122.98,  75.24,  99.01)
lower1  <- c( 64.78, 124.27,  86.88,
              78.45, 115.81,  94.97)
lower2  <- c(115.53,  60.53,  90.3 ,
             110.07,  63.33,  88.86)
upper1  <- c( 78.38, 161.82, 116.31,
              93.00, 139.04, 114.12)
upper2  <- c(159.04,  89.36, 118.62,
             137.41,  89.38, 110.32)
res     <- data.frame(CRO = rep(CRO, each = length(study) * length(metric)),
                      design = rep(designs, each = length(study) * length(metric)),
                      study = rep(study, length(CRO) * length(metric)),
                      n = c(rep(ns1, length(metric)),
                            rep(ns2, length(metric))), df = NA_integer_,
                      metric = rep(rep(metric, each = length(study)), length(CRO)),
                      PE = c(PE1, PE2), lower = c(lower1, lower2),
                      upper = c(upper1, upper2), BE = "fail",
                      CV = NA_real_, CV.pooled = "")
for (j in 1:nrow(res)) {
  if (res$lower[j] >= 80 & res$upper[j] <= 125) res$BE[j] <- "pass"
  # calculate the CV from the CI
  res$CV[j] <- signif(100 * suppressMessages(
                              CI2CV(pe = res$PE[j] / 100,
                                    lower = res$lower[j] / 100,
                                    upper = res$upper[j] / 100,
                                    n = res$n[j])), 4)
  # degrees of freedom as expression
  df <- PowerTOST:::.design.df(PowerTOST:::.design.props(
                               PowerTOST:::.design.no(res$design[j])),
                                                      robust = FALSE)
  n         <- res$n[j]
  res$df[j] <- eval(df) # calculate df from sample size
}
for (j in seq_along(CRO)) {
  for (k in seq_along(metric)) {
    # extract the ‘parts’ (design, df, and CV)
    CVs <- res[res$CRO == CRO[j] & res$study != "full" &
               res$metric == metric[k], c(2, 5, 11)]
    # CV pooled from the ‘parts’
    res$CV.pooled[res$CRO == CRO[j] & res$study == "full" &
                  res$metric == metric[k]] <- signif(CVpooled(CVs)$CV, 4)
  }
}
res <- res[, -which(names(res) %in% c("design", "df"))] # no more needed
print(res, row.names = FALSE)


Dif-tor heh smusma 🖖 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Ohlbe
★★★

France,
2021-09-17 17:56
(375 d 15:29 ago)

@ Helmut
Posting: # 22585
Views: 3,440
 

 Increased variability

Hi Helmut,

» The CVs calculated from the confidence intervals in both ‘parts’ are much lower than the ones in the ‘full’ study. Or the other way ’round: If we pool the CVs of the ‘parts’ we could expect values which are lower than the ‘observed’ (tee-hee!) ones.

Yep. The effect on the MSE is clearly visible on Figure 4 of the paper linked by ElMaestro.

Regards
Ohlbe
ElMaestro
★★★

Denmark,
2021-09-17 18:51
(375 d 14:34 ago)

@ Ohlbe
Posting: # 22586
Views: 3,500
 

 Increased variability

Hi all,

» » The CVs calculated from the confidence intervals in both ‘parts’ are much lower than the ones in the ‘full’ study. Or the other way ’round: If we pool the CVs of the ‘parts’ we could expect values which are lower than the ‘observed’ (tee-hee!) ones.

Correctly observed. And for exactly those reasons it makes very good sense to plot, for example, the RMSE, CV, SE of the difference, or even the width of the CI as a function of the (cumulated) number of subjects.

It is not so difficult. :-)
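A quick-and-dirty way to see what such a plot can reveal, sketched below with simulated data (all numbers here are made up for illustration): in a 2×2×2 crossover, a rough estimate of the intra-subject variance is var(log T/R)/2 (ignoring period effects), so the cumulative CV can be tracked as subjects accrue.

```r
# Sketch with simulated data (illustration only, not real study data):
# track the cumulative intra-subject CV as subjects accrue.
set.seed(42)
n <- 48
# within-subject log(T/R) differences; from subject 31 on, variability inflated
d <- c(rnorm(30, mean = log(0.95), sd = 0.25 * sqrt(2)),
       rnorm(18, mean = log(1.10), sd = 0.45 * sqrt(2)))
# rough sigma_w^2 from the variance of the log-ratios (period effects ignored)
CV <- sapply(3:n, function(i) 100 * sqrt(exp(var(d[1:i]) / 2) - 1))
plot(3:n, CV, type = "s", las = 1,
     xlab = "cumulated subjects", ylab = "intra-subject CV (%)")
abline(v = 30, lty = 2) # where the (simulated) shift begins
```

A sudden, sustained jump in the running CV at some point in the inclusion order is the kind of signal worth a closer look.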

Helmut
★★★
Vienna, Austria,
2021-09-18 15:26
(374 d 17:58 ago)

@ ElMaestro
Posting: # 22588
Views: 3,333
 

 Visualization

Hi ElMaestro,

» […] it makes very good sense to plot, for example, the RMSE, CV, SE of the difference, or even the width of the CI as a function of the (cumulated) number of subjects.

If you have all data, of course. A simple visualization using the res data.frame of my previous post:

[image]




windows(width = 5.5, height = 5.5)
op <- par(no.readonly = TRUE)
par(mar = c(3, 4, 2, 0), cex.axis = 0.9, cex.main = 1, font.main = 1)
split.screen(c(length(CRO), length(metric)))
scrn <- 0
for (j in seq_along(CRO)) {
  for (k in seq_along(metric)) {
    scrn <- scrn + 1
    screen(scrn)
    plot(c(0.5, length(study)+0.5), rep(100, 2), type = "n",
         ylim = range(res$lower[res$CRO == CRO[j] & res$metric == metric[k]],
                      res$upper[res$CRO == CRO[j] & res$metric == metric[k]]),
         log = "y", xlab = "", ylab = "PE, 90% CI", axes = FALSE,
         main = paste0(CRO[j], ": ", metric[k]))
    axis(2, las = 1)
    axis(1, at = seq_along(study), tick = FALSE,
         labels = paste("n =", res$n[res$CRO == CRO[j] &
                               res$metric == metric[k]]))
    abline(h = c(80, 100, 125), col = c("red", "black", "red"))
    box()
    for (l in seq_along(study)) {
      arrows(x0 = l,
             y0 = res$lower[res$CRO == CRO[j] & res$metric == metric[k] &
                            res$study == study[l]],
             y1 = res$upper[res$CRO == CRO[j] & res$metric == metric[k] &
                            res$study == study[l]], length = 0.08,
             angle = 90, lwd = 2, code = 3)
      points(l, res$PE[res$CRO == CRO[j] & res$metric == metric[k] &
                       res$study == study[l]], pch = 19, cex = 1.25)
    }
  }
}
close.screen(all = TRUE)
par(op)


jag009
★★★

NJ,
2021-09-19 09:18
(374 d 00:07 ago)

@ Ohlbe
Posting: # 22589
Views: 3,293
 

 Another two...

Hi Ohlbe,

» It looks like the FDA has just keelhauled 2 CROs again:

Good that the FDA caught them. Can't wait to see their clients sue their asses off for pulling those kinds of stunts.

J
ElMaestro
★★★

Denmark,
2022-05-24 12:00
(126 d 21:25 ago)

@ Ohlbe
Posting: # 23012
Views: 1,423
 

 Another two...

Hi all,

update (kind of) dated 20.5.2022.
Click here.

And this remains a topic no-one will really discuss.

This is the tip of the iceberg.

Helmut
★★★
Vienna, Austria,
2022-05-24 12:51
(126 d 20:34 ago)

@ ElMaestro
Posting: # 23013
Views: 1,420
 

 Blind monitors or greedy sponsors?

Hi ElMaestro,

Also interesting: the list of medicines concerned by the procedure. You can read it ‘backwards’, i.e., look at the sponsors. No pre-study vendor audit, sloppy monitoring? Yeah, on-site monitoring generally covers only the clinical part, but still…

Once we were there for a monitoring visit. The analytical part was performed somewhere else and hence, a mix-up of samples was not an issue. Luckily it was only a pilot study of a couple of new formulations to be used in a line extension. The reference performed exactly as expected, but the others were terrible → waste container.
However, during the monitoring we felt uncomfortable. No smoking gun, but dried blood on tissue paper in the phlebotomy room (before the study started!), contradicting statements about handling of the IMPs, etc.
I recommended my clients avoid this CRO.

Given that, were monitors of the other sponsors :blind: or just greedy?


Edit: Discovered a goody.

Successful Completion of Surprise Joint Audit by USFDA & WHO – 18 – 22 November 2019

Another proof of our impeccable regulatory track record. We are pleased to announce the successful com­pletion of Surprise Joint Audit by USFDA & WHO. The Audit lasted for 1 week. A Huge Thank You and Con­gra­tu­lations to our wonderful dedicated teams for their commitment and continual focus on quality. Without them this would not have been possible.


Ohlbe
★★★

France,
2022-05-24 14:21
(126 d 19:04 ago)

@ Helmut
Posting: # 23014
Views: 1,406
 

 Blind monitors or greedy sponsors?

Dear Helmut,

» also interesting the list of medicines concerned by the procedure. You can read it ‘backwards’, i.e., look at the sponsors.

Well, marketing authorisation holders / applicants were not necessarily the sponsor of the study. Some of them just bought the dossier off the shelf, before or after the marketing authorisation was granted. Due diligence: generally, nil, especially if the marketing authorisation was already granted. EMA organised a workshop with representatives of the generic drug industry some years ago, after the first referrals of this type. Nothing came out of it. Industry's position: "we can't audit the study when we buy the dossier coz' we were not the sponsor". It went nowhere.

» Given that, were monitors of the other sponsors :blind: or just greedy?

Or just incompetent ? Or were some of the sponsors perfectly aware of the scam, and willing to use such CROs precisely because they were sure their study would pass ?

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-05-24 16:35
(126 d 16:49 ago)

@ Ohlbe
Posting: # 23015
Views: 1,397
 

 Blind monitors or greedy sponsors?

Dear Ohlbe,

» Well, marketing authorisation holders / applicants were not necessarily the sponsor of the study. Some of them just bought the dossier off the shelf, before or after the marketing authorisation was granted.

Good point!

» Due diligence: generally, nil, especially if the marketing authorisation was already granted. EMA organised a workshop with representatives of the generic drug industry some years ago, after the first referrals of this type. Nothing came out of it. Industry's position: "we can't audit the study when we buy the dossier coz' we were not the sponsor". It went nowhere.

I was once hired by the potential buyer of a dossier to assess it. I asked about the language. Spanish. Told them that my knowledge hardly goes beyond ¡Hola! ¿Qué hay? Paella, Rioja, cerveza. Answer: fair enough, that’s fine. OK, business class flight to Barcelona, locked (!) for hours in a room with twenty binders. Funny.

» » Given that, were monitors of the other sponsors :blind: or just greedy?
» Or just incompetent ?

Another good point. However, it’s not rocket science, right?

» Or were some of the sponsors perfectly aware of the scam, and willing to use such CROs precisely because they were sure their study would pass ?

Possible. Shyte.

ElMaestro
★★★

Denmark,
2022-05-25 08:11
(126 d 01:13 ago)

@ Helmut
Posting: # 23016
Views: 1,367
 

 Blind monitors or greedy sponsors?

Hi Hötzi, all,

» Given that, were monitors of the other sponsors :blind: or just greedy?

these CROs were accused of practices that hardly anyone can see if monitoring is done in the classical fashion. I have been deeply involved with CROs doing "the switch", and none of them left any trace in the source data that would be easy to spot (this consideration includes spreadsheets, for those who heard such rumors).

Next, monitors are relatively low on the corporate ladder. They do not decide what they are supposed to look into. They do not write their own SOPs or give much direction. Therefore, don't blame it on the monitors. Blind monitors are not an issue, I think, but no monitors are: it isn't unusual that sponsors skip monitoring altogether, like some of our mutual friends in the US do when using this CRO.

I would much rather point towards something much more basic. There is no guideline that specifically says you have to monitor or when (ICH E6 has a vague wording on this point), and there is no guideline that says what exactly to look for. And if you've been working with CRO X for Y years and never had an issue, why would you consider "the switch" a manifest risk that requires activity? Mind you, you already convinced me (really!), but the task is to convince the decision-makers who know little about these operations, but who know how to read legal stuff.

Finally, "the switch" is perhaps much easier to see at a stage when monitoring is done. This is not a monitoring thing, basically.

DD: Well what would you look for? It was not until 2021 that someone actually showed that the switch can be seen with software, but this requires that such software is written and available along with data.

Helmut
★★★
Vienna, Austria,
2022-05-25 09:19
(126 d 00:06 ago)

@ ElMaestro
Posting: # 23017
Views: 1,413
 

 (Cumulative) T/R-ratio vs. time

Hi ElMaestro,

first of all, I agree with all you wrote. ;-)

» Finally, "the switch" is perhaps much easier to see at a stage when monitoring is done.

It happens in the analytical stage. No monitoring…

» DD: Well what would you look for? It was not until 2021 that someone actually showed that the switch can be seen with software, but this requires that such software is written and available along with data.

Remember GVK Bio back in the day? Doable in a spreadshit.

[image]
ANSM Inspection report 2013

The data are available to the sponsor and potential buyers of the dossier as well.

Ohlbe
★★★

France,
2022-05-25 10:53
(125 d 22:32 ago)

@ Helmut
Posting: # 23018
Views: 1,402
 

 Sponsors and CRO selection

Dear Helmut and ElMaestro,

» Remember GVK Bio back in the day? Doable in a spreadshit.

» The data are available to the sponsor and potential buyers of the dossier as well.

Well, remember GVK ? According to the ANSM inspection report (document A3, page 3), the whistleblower said that "the switch", as ElMaestro calls it, was done only for some sponsors, who paid the head of the bioanalytical lab to do it. The report does not mention any evidence being found to support this allegation, but that's indeed a possibility.

Overall:
  • if the sponsor selected the CRO precisely because it is doing "the switch" (be the decision to select that CRO or to make "the switch" made by the Man in the Penguin Armani Suit or by somebody not willing to bring bad news to this man), well...
  • if not: I agree with ElMaestro that "the switch" is extremely unlikely to be detected on-site during a monitoring visit or study-specific audit, and that looking into the data is most likely to be the only way to detect it. But I also agree with Helmut: CROs doing "the switch" are probably not the best on the market, and there are a number of "warning bells" which should have rung during a proper site selection audit or during monitoring visits. I once talked to a regulatory inspector who told me of inspections where he just wanted to run away after half a day, having just done a tour of the facilities and a couple of discussions with staff, and was wondering how anybody with a sane mind could have contracted such people.

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-05-25 12:04
(125 d 21:20 ago)

@ Ohlbe
Posting: # 23019
Views: 1,332
 

 I still think that T/R-ratios are useful

Dear Ohlbe,

» Well, remember GVK ? According to the ANSM inspection report (document A3, page 3), …

Didn’t know that the documents are in the public domain in the meantime. ;-)

» … the whistleblower said that "the switch", as ElMaestro calls it, was done only for some sponsors, who paid the head of the bioanalytical lab to do it. The report does not mention any evidence being found to support this allegation, but that's indeed a possibility.

However, as I wrote above it’s a pretty trivial exercise for anybody to critically inspect the data. No need to buy a pig in a poke.

An example from my files: a 2×2×2 crossover of an HVD back in the day when reference-scaling was not acceptable; assumed T/R-ratio 95%, CV 40%, targeted at ≥80% power, anticipated dropout rate 10%. Hence the large sample size (74 dosed). In the study (71 eligible subjects) the PE was, at 96.77%, slightly better than expected (90% CI: 89.82–104.26%), as was the CV at 27.1%.
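These figures can be cross-checked with PowerTOST (a sketch assuming the default 2×2×2 design: back-calculate the CV from the CI, then the post hoc power from PE and CV):

```r
# Cross-check: back-calculate the CV from the reported 90% CI (2x2x2 assumed)
library(PowerTOST)
CV <- CI2CV(pe = 0.9677, lower = 0.8982, upper = 1.0426, n = 71)
round(100 * CV, 2)  # should land close to the reported 27.1%
# post hoc power given the observed PE and CV
power.TOST(CV = CV, theta0 = 0.9677, n = 71)
```

Anybody with the dossier in hand can run this kind of sanity check on the reported PE/CI/CV triplets.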

[image]
Possibly the statistician was fired by the man in the Armani suit shouting
»You’re a bloody loser‼ Why didn’t we perform the study in 16 subjects‽
I calculated post hoc power in FARTSSIE and it’s already 81.6%! What’s the point of having 99.4% at the end?«

(With a sigh, the statistician types cmd rmdir "C:\Users\Genius" /s /q
and tries to become a regulator or an academic…)


» – if the sponsor selected the CRO precisely because it is doing "the switch" (be the decision to select that CRO or to make "the switch" made by the Man in the Penguin Armani Suit or by somebody not willing to bring bad news to this man), well...

Likely he receives annual gratifications because all studies pass. If sumfink is uncovered, he moves on to the next company.
Nowadays being employed by numerous companies is understood as sign of competence and ‘solution based flexibility’.  Boomers  Dinosaurs who stayed for decades in the same company are on their way to retire.

Ohlbe
★★★

France,
2022-05-25 14:40
(125 d 18:45 ago)

@ Helmut
Posting: # 23020
Views: 1,331
 

 T/R-ratios are useful

Dear Helmut,

» » Well, remember GVK ? According to the ANSM inspection report (document A3, page 3), …

» Didn’t know that the documents are in the public domain in the meantime. ;-)

I found them by chance, linked to this paper and that one from the same author. Apparently he made a freedom-of-information request and posted the reports he obtained.

» However, as I wrote above it’s a pretty trivial exercise for anybody to critically inspect the data.

"Trivial" if you have the coding skills or know someone who does, yes, definitely. And yes, it definitely makes sense, even (especially ?) if you are buying a dossier from the shelf.

» Possibly the statistician was fired by the man in the Armani suit shouting
» »You’re a bloody loser‼ Why didn’t we perform the study in 16 subjects‽
» I calculated post hoc power in FARTSSIE and it’s already 81.6%! What’s the point of having 99.4% at the end?«

Come on, the man in the Armani suit has no idea how to use FARTSSIE and has never even heard of it :-D

Regards
Ohlbe
Helmut
★★★
Vienna, Austria,
2022-05-25 15:35
(125 d 17:50 ago)

@ Ohlbe
Posting: # 23021
Views: 1,314
 

 T/R-ratios are useful

Dear Ohlbe,

» » Didn’t know that the documents are in the public domain in the meantime. ;-)
» I found them by chance, linked to this paper and that one from the same author.

THX, interesting.

» » However, as I wrote above it’s a pretty trivial exercise for anybody to critically inspect the data.
» "Trivial" if you have the coding skills or know someone who does, yes, definitely.

Your wish is my command. R script at the end. To play with the script, download my data. There is another set, where I multiplied all original values of T up to subject 60 by 0.85 and thereafter by 1.30.

[image]

Is this manipulation really obvious? By chance the T/R-ratios and PEs start to increase with subject 40. Would the original data already trigger an alarm?*

» And yes, it definitely makes sense, even (especially ?) if you are buying a dossier from the shelf.
»
» » Possibly the statistician was fired by the man in the Armani suit shouting
» » I calculated post hoc power in FARTSSIE and it’s already 81.6%! What’s the point of having 99.4% at the end?«
»
» Come on, the man in the Armani suit has no idea how to use FARTSSIE and has never even heard of it :-D

How could I forget? Only professional in PowerPoint, Twitter, WhatsApp, copy-pasting from one document to another, networking at conferences, small talk at cocktail parties. Preferred communication via LinkedIn.



Currently only for 2×2×2 crossover designs. If you are not interested in the cumulative PE and CI, delete everything in red. Then you need only the columns Subject, Treatment, and Cmax in the file. If you want a version reading from XLS(X), let me know.

library(PowerTOST)
path <- "your path"
file <- "yourdata.csv"
# required case-sensitive column names (in any order):
#   Subject, Sequence, Treatment, Period, Cmax
#   the treatments must be coded as T and R

sep  <- "," # column separator: inspect the file; alternatively "\t" or ";"
dec  <- "." # decimal separator; alternatively ","
com  <- "#" # for eventual comments, sometimes found in the header
na   <- c("NA", "ND", ".", "", "Missing") # will be forced to NA
if (sep == dec) stop("sep and dec must be different!")
data <- read.csv(file = paste0(path, "/", file), sep = sep, dec = dec, quote = "",
                 strip.white = TRUE, comment.char = com, na.strings = na)
# if there are other columns in the file, drop them
keep <- c("Subject", "Sequence", "Treatment", "Period", "Cmax")
data <- data[, keep]
subjects <- unique(data$Subject)
n        <- length(subjects)
Cmax.ratio <- data$Cmax[data$Treatment == "T"] / data$Cmax[data$Treatment == "R"]
Cmax.cum   <- numeric(n)
for (i in 1:n) {
  Cmax.cum[i] <- prod(Cmax.ratio[1:i])^(1/i)
}
res  <- data.frame(PE = rep(NA_real_, n-1), lower = NA_real_,
                   upper = NA_real_, CV = NA_real_, power = NA_real_)
# factorize for the linear model
facs <- c("Subject", "Sequence", "Treatment", "Period")
data[facs] <- lapply(data[facs], factor)
ow   <- options("digits")
options(digits = 12) # more digits for anova
on.exit(options(ow))  # ensure that options are reset if an error occurs
for (i in 3:n) {
  tmp            <- head(data, i * 2)
  m              <- lm(log(Cmax) ~ Subject + Period + Sequence + Treatment,
                                   data = tmp)
  res$PE[i-1]    <- exp(coef(m)[["TreatmentT"]])
  res[i-1, 2:3]  <- as.numeric(exp(confint(m, "TreatmentT",
                                           level = 1 - 2 * 0.05)))
  res$CV[i-1]    <- mse2CV(anova(m)[["Residuals", "Mean Sq"]])
  res$power[i-1] <- suppressMessages(
                      power.TOST(CV = res$CV[i-1], theta0 = res$PE[i-1],
                                 n = length(unique(tmp$Subject))))
}
# print(round(100 * res, 2)) # uncomment this line for PE, CI, CV, post hoc power

dev.new(width = 4.5 * sqrt(2), height = 4.5)
op   <- par(no.readonly = TRUE)
par(mar = c(3.2, 3.1, 0, 0.1), cex.axis = 0.9,
    mgp = c(2, 0.5, 0), ljoin = "mitre", lend = "square")
plot(1:n, Cmax.ratio, type = "n", log = "y", axes = FALSE, xlab = "analysis",
     ylab = expression(italic(C)[max[T]]*" / "*italic(C)[max[R]]))
x.tick <- pretty(1:n)
x.tick <- c(1, x.tick[2:(length(x.tick)-2)], n)
axis(1, at = x.tick, labels = paste0("#", x.tick))
y.tick <- sort(c(0.8, 1.25, axTicks(side = 2, log = TRUE)))
axis(2, at = y.tick, las = 1)
grid(nx = NA, ny = NULL)
abline(v = x.tick, lty = 3, col = "lightgrey")
abline(h = c(0.8, 1, 1.25), lty = c(2, 1, 2))
box()
legend("topleft", legend = c("individual T/R-ratio", "cumulative T/R-ratio",
                             "point estimate", "confidence limits"
),
       pch = c(21, NA, NA, NA), lty = c(0, 1, 1, 1), lwd = c(1, 2, 2),
       col = c(rep("blue", 2), "darkred", "red"), pt.bg = "#AAAAFF80",
       pt.cex = 1, bg = "white", seg.len = 3, cex = 0.9, y.intersp = 1.1)
lines(1:n, Cmax.cum, col = "blue", lwd = 2, type = "s")
lines(2:n, res$PE, col = "darkred", lwd = 2, type = "s")
lines(2:n, res$lower, col = "red", type = "s")
lines(2:n, res$upper, col = "red", type = "s")

points(1:n, Cmax.ratio, pch = 21, col = "blue", bg = "#AAAAFF80")
par(op)


Helmut
★★★
Vienna, Austria,
2022-05-26 15:32
(124 d 17:53 ago)

@ Ohlbe
Posting: # 23023
Views: 1,238
 

 Bust the Buster

Dear Ohlbe,

I tried to reproduce the ‘Buster routines’ of the paper mentioned above. As usual, the R script is at the end. Sorry, 200+ lines of code.

This is what I got for my manipulated data set:

A
[image]
Like the paper’s Fig. 1.
We see that for the Test the deviations from the mean increase at the end.
However, statistically the trend is not significant.


B
[image]
Like the paper’s Fig. 2.
For the reference all is good. As expected, since we use the original data.


C
[image]
OK – but the ratios start to rise (by chance) before the manipulation starts with subject 60.
Not convinced whether we can eye-ball it.


D
[image]
Like the paper’s Fig. 3.
Possibly better. The original study was properly powered and by chance would pass with fewer subjects.
After the manipulation it needs almost the complete data set to pass.


E
[image]
Like the paper’s Fig. 5.
Like in A we see rising residuals at the end. However, statistically the trend is not significant.


F
[image]
Like the paper’s Fig. 4.
No clue.


G
[image]
Only for completeness.

For A–C we need only the values of T and R and the subjects (more precisely: their order of analyses). For the others additionally the randomization is required.
Potentially A, C, D (and E?) are useful. I have some doubts whether a ‘mild’ manipulation (:-D) like in my example can reliably be detected. You can try yet another data set where I multiplied T in the first half by 0.8 and in the second half by 1.25. Then the manipulation is obvious in C, D, and E (trend still not significant; p = 0.1677).
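For illustration, the manipulation just described could be produced along these lines. This is only a sketch with simulated values (the column names follow the script below; the data are made up, not the Post23021 sets):

```r
# Fabricate a simple T/R data set and apply the manipulation described
# above: multiply T of the first half of the subjects by 0.8 and of the
# second half by 1.25 (simulated data, assumed column names).
set.seed(42)
n     <- 60
data  <- data.frame(Subject   = rep(1:n, each = 2),
                    Treatment = rep(c("T", "R"), n),
                    Cmax      = exp(rnorm(2 * n, log(100), 0.2)))
half1 <- data$Treatment == "T" & data$Subject <= n / 2
half2 <- data$Treatment == "T" & data$Subject >  n / 2
data$Cmax[half1] <- data$Cmax[half1] * 0.80
data$Cmax[half2] <- data$Cmax[half2] * 1.25
```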

Unfortunately the data set BUa627311.csv is not accessible at doi:10.1016/j.ejps.2020.105595. I suspect that the manipulation was extreme.


library(PowerTOST) # Lazy: convert MSE to CV and back
library(randtests) # Provides Wald-Wolfowitz runs test
path <- "your.path"
file <- "yourdata.csv"
url  <- NULL # alternatively a valid URL enclosed in single or double quotes
# url <- "https://bebac.at/downloads/Post23021a.csv" # My original data
# url <- "https://bebac.at/downloads/Post23021b.csv" # Manipulated data
# Required case-sensitive column names (in any order):
#   Subject, Sequence, Treatment, Period, Cmax

sep  <- "," # Column separator: Inspect the file; alternatively "\t" or ";"
dec  <- "." # Decimal separator; alternatively ","
if (sep == dec) stop("sep and dec must be different!")
com  <- "#" # For possible comments, sometimes found in the header
na   <- c("NA", "ND", ".", "", "Missing") # Will be set to NA
if (is.null(url)) { # Read local file
  if (!substr(path, nchar(path), nchar(path)) == "/")
    path <- paste0(path, "/") # Add trailing forward slash even on Windows
  data <- read.csv(file = paste0(path, file), sep = sep, dec = dec,
                   quote = "", strip.white = TRUE, comment.char = com,
                   na.strings = na)
} else {
  data <- read.csv(file = url, sep = sep, dec = dec, quote = "",
                   strip.white = TRUE, comment.char = com, na.strings = na)
}
subjects <- unique(data$Subject)
n        <- length(subjects)
# If there are other columns in the file, drop them
modeling <- TRUE # Set to FALSE if Sequence and Period are not available
if (modeling) {
  keep <- c("Subject", "Sequence", "Treatment", "Period", "Cmax")
  if (sum(names(data) %in% keep) < 5) stop("Check column names!")
} else {
  keep <- c("Subject", "Treatment", "Cmax")
  if (sum(names(data) %in% keep) < 3) stop("Check column names!")
}
data <- data[, keep]

# Difference of Cmax from mean (both in log-scale)
delta.T <- log(data$Cmax[data$Treatment == "T"]) -
           mean(log(data$Cmax[data$Treatment == "T"]))
delta.R <- log(data$Cmax[data$Treatment == "R"]) -
           mean(log(data$Cmax[data$Treatment == "R"]))

# Wald-Wolfowitz runs test of randomness for continuous data
# alternative = "left.sided" tests for a trend

rt.T    <- runs.test(delta.T, alternative = "left.sided")
rt.T    <- c(paste(rt.T$runs, "runs"),
             bquote(italic(p) == . (sprintf("%.4f", rt.T$p.value))))
rt.R    <- runs.test(delta.R, alternative = "left.sided")
rt.R    <- c(paste(rt.R$runs, "runs"),
             bquote(italic(p) == . (sprintf("%.4f", rt.R$p.value))))

# T/R-ratios: individual and cumulative with time
Cmax.ratio <- data$Cmax[data$Treatment == "T"] /
              data$Cmax[data$Treatment == "R"]
Cmax.cum   <- cumprod(Cmax.ratio)^(1 / seq_len(n)) # cumulative geometric mean

if (modeling) { # Only if we have Sequence and Period in the data
  res  <- data.frame(subjects = NA_integer_, PE = rep(NA_real_, n-1),
                     lower = NA_real_, upper = NA_real_, MSE = NA_real_,
                     CV = NA_real_, power = NA_real_)
  # Factorize columns (effects) for the linear model
  facs       <- c("Subject", "Sequence", "Treatment", "Period")
  data[facs] <- lapply(data[facs], factor)
  # Find the first occasion where we have at least one subject
  # in both sequences (otherwise, the model would not work)

  seqs     <- character()
  for (subj in seq_along(subjects)) {
    seqs <- c(seqs, unique(
                      as.character(data$Sequence[data$Subject ==
                                   subjects[subj]])))
    if (length(unique(seqs)) > 1) break
  }
  ow       <- options("digits") # Save original options
  options(digits = 12)          # More digits for anova
  on.exit(options(ow))          # Ensure that options are reset on exit
  for (i in subj:n) {
    tmp               <- head(data, i * 2)
    res$subjects[i-1] <- length(unique(tmp$Subject))
    # Acc. to GLs: Nested model is bogus; Subject works as well
    m                 <- lm(log(Cmax) ~ Sequence + Subject %in% Sequence +
                                        Period + Treatment, data = tmp)
    res$PE[i-1]       <- exp(coef(m)[["TreatmentT"]])
    res[i-1, 3:4]     <- as.numeric(exp(confint(m, "TreatmentT",
                                                level = 1 - 2 * 0.05)))
    res$MSE[i-1]      <- anova(m)[["Residuals", "Mean Sq"]]
    res$CV[i-1]       <- mse2CV(res$MSE[i-1])
    res$power[i-1]    <- suppressMessages(
                           power.TOST(CV = res$CV[i-1], theta0 = res$PE[i-1],
                                      n = length(unique(tmp$Subject))))
  }
  res <- res[complete.cases(res), ]

  # Raw residuals of T
  pred     <- suppressWarnings(
                exp(predict(m, newdata = data[data$Treatment == "T", 1:4])))
  # Raw residual = observed – predicted (the common convention)
  resid    <- data[data$Treatment == "T", "Cmax"] - pred
  rt.resid <- runs.test(resid, alternative = "left.sided")
  rt.resid <- c(paste(rt.resid$runs, "runs"),
                bquote(italic(p) == . (sprintf("%.4f", rt.resid$p.value))))
  # Studentized residuals
  stud <- rstudent(m)
  # Two values with opposite signs; we need only one of them
  stud <- stud[c(TRUE, FALSE)]
  # Uncomment the next line to show PE, CI, MSE, CV, post hoc power
  # print(round(res, 5))
}

# Prepare plots
x.tick <- pretty(1:n)
x.tick <- c(1, x.tick[2:(length(x.tick)-2)], n)
dev.new(width = 4.5 * sqrt(2), height = 4.5, record = TRUE)
op      <- par(no.readonly = TRUE)
par(mar = c(3.2, 3.3, 0.1, 0.1), cex.axis = 0.9,
    mgp = c(2, 0.5, 0), ljoin = "mitre", lend = "square", ask = TRUE)

###############################################
# Difference of log(Cmax) from geometric mean #
###############################################
# ylim common to both treatments

ylim    <- c(-1, +1) * max(abs(range(c(delta.T, delta.R))))
y.ax    <- sprintf("%+.1f", pretty(ylim))
y.ax[y.ax == "+0.0"] <- "±0.0"
########
# Test #
########

bp      <- barplot(delta.T, plot = FALSE)
plot(bp, bp, ylim = ylim, type = "n", axes = FALSE, frame.plot = TRUE,
     xlab = "Subject", ylab = expression(log(italic(C)[max[T]])*" – mean [ "*
                                         log(italic(C)[max[T]])*" ]"))
barplot(delta.T, ylim = ylim, las = 1, col = "lightskyblue", border = NA,
        axes = FALSE, axisnames = FALSE, ylab = "", add = TRUE)
axis(1, at = bp[x.tick], labels = x.tick)
axis(2, at = pretty(ylim), labels = y.ax, las = 1)
legend("top", legend = rt.T, inset = 0.02, box.lty = 0, cex = 0.9,
       x.intersp = 0)
#############
# Reference #
#############

bp      <- barplot(delta.R, plot = FALSE)
plot(bp, bp, ylim = ylim, type = "n", axes = FALSE, frame.plot = TRUE,
     xlab = "Subject", ylab = expression(log(italic(C)[max[R]])*" – mean [ "*
                                         log(italic(C)[max[R]])*" ]"))
barplot(delta.R, ylim = ylim, las = 1, col = "lightskyblue", border = NA,
        axes = FALSE, axisnames = FALSE, ylab = "", add = TRUE)
axis(1, at = bp[x.tick], labels = x.tick)
axis(2, at = pretty(ylim), labels = y.ax, las = 1)
legend("top", legend = rt.R, inset = 0.02, box.lty = 0, cex = 0.9,
       x.intersp = 0)

#########################
# Individual T/R-ratios #
#########################

plot(1:n, Cmax.ratio, type = "n", log = "y", axes = FALSE, xlab = "Subject",
     ylab = expression(italic(C)[max[T]]*" / "*italic(C)[max[R]]))
y.tick <- sort(c(0.8, 1.25, axTicks(side = 2, log = TRUE)))
axis(1, at = x.tick)
axis(2, at = y.tick, las = 1)
grid(nx = NA, ny = NULL)
abline(v = x.tick, lty = 3, col = "lightgrey")
abline(h = c(0.8, 1, 1.25), lty = c(2, 1, 2))
legend("topleft", legend = c("Individual T/R-ratio", "Cumulative T/R-ratio"),
       pch = c(21, NA), lty = c(0, 1), lwd = c(1, 2), inset = 0.02,
       col = c(rep("blue", 2)), pt.bg = "#AAAAFF80", box.lty = 0,
       pt.cex = 1, bg = "white", seg.len = 2.5, cex = 0.9, y.intersp = 1.25)
box()
lines(1:n, Cmax.cum, col = "blue", lwd = 2, type = "s")
points(1:n, Cmax.ratio, pch = 21, col = "blue", bg = "#AAAAFF80")

if (modeling) {
  #################
  # PE and 90% CI #
  #################

  ylim <- range(c(res$lower, res$upper), na.rm = TRUE)
  plot(1:n, Cmax.ratio, type = "n", ylim = ylim, log = "y", axes = FALSE,
       xlab = "Subject", ylab = "Point Estimate and 90% Confidence Interval")
  axis(1, at = x.tick)
  axis(2, at = y.tick, las = 1)
  grid(nx = NA, ny = NULL)
  abline(v = x.tick, lty = 3, col = "lightgrey")
  abline(h = c(0.8, 1, 1.25), lty = c(2, 1, 2))
  legend("bottomright", legend = c("Point Estimate", "Confidence Limits"),
         lty = c(1, 1), lwd = c(2, 1), bg = "white", box.lty = 0, inset = 0.02,
         col = rep("blue", 2), seg.len = 2.5, cex = 0.9, y.intersp = 1.25)
  box()
  lines(res$subjects, res$PE, col = "blue", lwd = 2, type = "s")
  lines(res$subjects, res$lower, col = "blue", type = "s")
  lines(res$subjects, res$upper, col = "blue", type = "s")

  #######
  # MSE #
  #######

  plot(res$subjects, res$MSE, type = "n", xlim = c(1, n), axes = FALSE,
       xlab = "Subject", ylab = expression(italic(MSE)))
  axis(1, at = x.tick)
  axis(2, las = 1)
  grid(nx = NA, ny = NULL)
  abline(v = x.tick, lty = 3, col = "lightgrey")
  box()
  lines(res$subjects, res$MSE, col = "blue", lwd = 2, type = "s")

  #############################
  # Raw model residuals for T #
  #############################

  ylim <- c(-1, +1) * max(abs(range(resid)))
  y.ax <- sprintf("%+.1f", pretty(ylim))
  y.ax[y.ax == "+0.0"] <- "±0.0"
  bp   <- barplot(resid, plot = FALSE)
  plot(bp, resid, ylim = ylim, type = "n", axes = FALSE, frame.plot = TRUE,
       xlab = "Subject", ylab = "Raw model residual (T)")
  barplot(resid, ylim = ylim, col = "lightskyblue", border = NA,
          axes = FALSE, axisnames = FALSE, ylab = "", add = TRUE)
  axis(1, at = bp[x.tick], labels = x.tick)
  axis(2, at = pretty(ylim), labels = y.ax, las = 1)
  legend("top", legend = rt.resid, inset = 0.02, box.lty = 0, cex = 0.9,
         x.intersp = 0)

  ###############################
  # Studentized model residuals #
  ###############################

  ylim <- c(-1, +1) * max(abs(range(stud)))
  y.ax <- sprintf("%+.1f", pretty(ylim))
  y.ax[y.ax == "+0.0"] <- "±0.0"
  bp   <- barplot(stud, plot = FALSE)
  plot(bp, stud, ylim = ylim, type = "n", axes = FALSE, frame.plot = TRUE,
       xlab = "Subject", ylab = "Studentized model residual")
  abline(h = c(-1, +1) * qnorm(0.975), lty = 3, col = "lightgrey")
  barplot(stud, ylim = ylim, col = "lightskyblue", border = NA,
          axes = FALSE, axisnames = FALSE, ylab = "", add = TRUE)
  axis(1, at = bp[x.tick], labels = x.tick)
  axis(2, at = pretty(ylim), labels = y.ax, las = 1)
}
par(op)


mittyri
★★  

Russia,
2022-05-26 18:28
(124 d 14:57 ago)

@ Helmut
Posting: # 23024
Views: 1,117
 

 complicate the assessor's life

Dear Helmut,

well, 'smart' analytical labs with intentions to correct the results based on the intermittent analysis will change the order of subjects in analysis. :-D
The unveiling is still possible, but will require the boring process of extracting the order of analyses from the analytical documents.

Kind regards,
Mittyri
PharmCat
★    

Russia,
2022-05-27 09:57
(123 d 23:28 ago)

@ mittyri
Posting: # 23026
Views: 1,087
 

 complicate the assessor's life

» well, 'smart' analytical labs with intentions to correct the results based on the intermittent analysis will change the order of subjects in analysis. :-D

Hi all!

All these statistical techniques are helpful, but they are not evidence. They can be a trigger for an audit. Also, these cases became possible because the labs didn't expect such checks.
Helmut
★★★
Vienna, Austria,
2022-05-27 10:11
(123 d 23:14 ago)

@ mittyri
Posting: # 23027
Views: 1,064
 

 complicate the assessor's life

Hi Mittyri,

» well, 'smart' analytical labs with intentions to correct the results based on the intermittent analysis will change the order of subjects in analysis. :-D

Agree. Nevertheless, the analytical report should state on which dates each subject’s samples were analyzed.

» The unveiling is still possible, but will require some boring process of order extraction from analytical docs

If one suspects that the report is fabricated, one has to resort to the raw data. :-(

Since the runs test was not useful, I tried the Durbin-Watson test for autocorrelated errors (available in the car package). A dead end as well, even for my extremely manipulated third data set. However, testing the slope of a simple regression looks promising. Here for my second data set:

[image]

[image]

[image]
Deviating from the paper I decided to use residual = observed – predicted.
That’s more common.

For the original (real-world) data set none of the slopes was significant (p = 0.7252, 0.8362, and 0.8850). For my third data set I got p-values of 5.233·10⁻⁴, 1.161·10⁻⁴, and 1.699·10⁻⁵, respectively. We know that regression is sensitive to extreme values (see this post for a famous example). Here that is a desirable property.
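The slope test can be sketched as follows (simulated data with assumed names, not any of the post’s data sets): regress the log T/R-ratios on the order of analyses and test whether the slope differs from zero.

```r
# A drift introduced by manipulating the second half of the subjects
# shows up as a non-zero slope of log(T/R) vs the order of analyses.
set.seed(123)
n          <- 60
Cmax.ratio <- exp(rnorm(n, 0, 0.10))          # 'clean' ratios around 1
Cmax.ratio[31:60] <- Cmax.ratio[31:60] * 1.25 # manipulated second half
ord        <- seq_len(n)                      # order of analyses
m          <- lm(log(Cmax.ratio) ~ ord)
p.slope    <- summary(m)$coefficients["ord", "Pr(>|t|)"]
```

For un-manipulated data the slope should be compatible with zero, as it was for the original data set above.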

This one is telling:

[image]


sameep
☆    

India,
2022-05-31 08:34
(120 d 00:51 ago)

@ Helmut
Posting: # 23031
Views: 942
 

 Thanks for Busting the Buster

Dear Helmut,

Thanks for sharing the code.

» If you want a version reading from XLS(X), let me know.

» However, testing the slope of a simple regression looks promising.

Could you please share the XLSX version of the final code?

Regards,
Sameep.
Helmut
★★★
Vienna, Austria,
2022-05-31 15:22
(119 d 18:02 ago)

@ sameep
Posting: # 23033
Views: 857
 

 Thanks for Busting the Buster

Hi Sameep,

» Could you please share the XLSX version of the final code?

That’s a misunderstanding. The R script contains stuff which I will not even try to port to Excel (Wald–Wolfowitz runs test, Durbin–Watson test, linear models for T vs time, R vs time, T/R vs time, BE vs time, model residuals vs time). I was only talking about importing the data from xls(x). Not my top priority. In the meantime, save your data in CSV format.

I’m still working on the script (in the meantime a monster with 496 lines of code), allowing not only 2×2×2 but also higher-order crossovers. Not fail-safe.
Maybe (no promises) I will post it later.
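If someone wants to experiment in the meantime: xls(x) import could be bolted onto the script along these lines. This is only a sketch assuming the readxl package (not part of base R); `read_trial` is a hypothetical helper, and it falls back to CSV when readxl is not installed.

```r
# Hypothetical helper: read the data from xls(x) if 'readxl' is
# installed, otherwise expect a CSV file as in the script above.
read_trial <- function(file, sep = ",", dec = ".") {
  if (grepl("\\.xlsx?$", file, ignore.case = TRUE) &&
      requireNamespace("readxl", quietly = TRUE)) {
    as.data.frame(readxl::read_excel(file))
  } else {
    read.csv(file, sep = sep, dec = dec, strip.white = TRUE)
  }
}
```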

sameep
☆    

India,
2022-06-01 13:17
(118 d 20:07 ago)

@ Helmut
Posting: # 23037
Views: 826
 

 A small point for the code

Dear Helmut,

» for (i in 3:n)

Works for all crossover data sets, whereas

» for (i in subj:n) {

Works only if the first two subjects are in the same sequence. If their sequences differ (i.e., TR and RT), it gives the error below, because the model fit leaves zero residual degrees of freedom.

Error in power.TOST(CV = res$CV[i - 1], theta0 = res$PE[i - 1], n = length(unique(tmp$Subject))) :
  n too small. Degrees of freedom <1!
In addition: Warning messages:
1: In qt(a, object$df.residual) : NaNs produced
2: In anova.lm(m) :
  ANOVA F-tests on an essentially perfect fit are unreliable
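The zero-degrees-of-freedom situation is easy to demonstrate with a minimal made-up example: with only two subjects in different sequences the model is saturated.

```r
# Two subjects, sequences TR and RT: four observations and a model of
# rank four leave zero residual degrees of freedom (a perfect fit).
tmp <- data.frame(Subject   = factor(c(1, 1, 2, 2)),
                  Sequence  = factor(c("TR", "TR", "RT", "RT")),
                  Period    = factor(c(1, 2, 1, 2)),
                  Treatment = factor(c("T", "R", "R", "T")),
                  Cmax      = c(95, 100, 102, 98))
m   <- lm(log(Cmax) ~ Sequence + Subject %in% Sequence +
                      Period + Treatment, data = tmp)
m$df.residual # 0: no confidence interval can be computed
```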


» I was only talking about importing the data from xls(x).

This is what I meant.

» I’m still working on the script (in the meantime a monster with 525 lines of code), allowing not only 2×2×2 but also higher-order crossovers. Not fail-safe.

Thanks in advance, looking forward to the goodie.

Regards,
Sameep.
Helmut
★★★
Vienna, Austria,
2022-06-01 14:11
(118 d 19:14 ago)

@ sameep
Posting: # 23038
Views: 881
 

 A small point for the code

Hi Sameep,

corrected in the meantime; use:
    nseq       <- length(unique(data$Sequence))
    seqs       <- character()
    for (subj in seq_along(subjects)) {
      seqs <- c(seqs, unique(
                        as.character(data$Sequence[data$Subject ==
                                     subjects[subj]])))
      if (length(unique(seqs)) >= nseq) break
    }
    ow         <- options("digits") # Save original options
    options(digits = 12) # More digits for anova
    on.exit(options(ow)) # Ensure that options are reset if an error occurs
    for (i in subj:n) {
      tmp               <- head(data, i * 2)
      res$subjects[i-1] <- length(unique(tmp$Subject))
      # Acc. to GLs: Nested model is bogus; Subject works as well
      m                 <- lm(log(Cmax) ~ Sequence + Subject %in% Sequence +
                                          Period + Treatment, data = tmp)
      res$PE[i-1]       <- exp(coef(m)[["TreatmentT"]])
      if (res$subjects[i-1] > nseq) {
        res[i-1, 3:4]  <- as.numeric(exp(confint(m, "TreatmentT",
                                                 level = 1 - 2 * alpha)))
        res$MSE[i-1]   <- anova(m)[["Residuals", "Mean Sq"]]

        res$CV[i-1]    <- 100 * mse2CV(res$MSE[i-1])
        res$power[i-1] <- suppressMessages( # if unbalanced
                            power.TOST(CV = res$CV[i-1] / 100,
                                       theta0 = res$PE[i-1],
                                       n = length(unique(tmp$Subject))))
        if (res$lower[i-1] >= theta1 & res$upper[i-1] <= theta2)
          res$BE[i-1] <- TRUE
      }
    }
    res            <- res[complete.cases(res), ] # Remove NA

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz