Jamesmartinn
2011-11-26 21:22
Posting: # 7728

 Sample Size and CV for Replicate Design [Power / Sample Size]

Hi!

I'm completely new to the area of BE/BA studies.

My question concerns performing a sample size calculation for a crossover replicate study.

In particular,

I'm told that the intra-subject variability, expressed as %CV of a measured response, is expected to be 35%. I'm asked to find the sample size needed to have 80% confidence that the estimated %CV is larger than 30%. Assume I'm planning to do a 2×4 crossover replicate design; that is, I will be looking at TRTR and RTRT.

I'm completely lost on how to approach the question. Any direction or guidance in answering this particular question will be much appreciated. I have access to SAS and R. Please advise.
ElMaestro
Denmark,
2011-11-26 23:38
@ Jamesmartinn
Posting: # 7729

 Sample Size and CV for Replicate Design

Hi JamesMartinn,

❝ I'm told that the intra-subject variability, expressed as %CV of a measured response, is expected to be 35%. I'm asked to find the sample size needed to have 80% confidence that the estimated %CV is larger than 30%. Assume I'm planning to do a 2×4 crossover replicate design; that is, I will be looking at TRTR and RTRT.


❝ I'm completely lost on how to approach the question. Any direction or guidance in answering this particular question will be much appreciated. I have access to SAS and R. Please advise.


You should download Detlew Labes' R package called PowerTOST.
It will allow you to calculate the sample size (number of subjects) needed to achieve a given power under a set of assumptions and a given design. You have a 2-sequence, 4-period, 2-treatment design; that's fairly common and easily handled by the package. You do not need to understand all the stats in order to use the tool.

This forum is a great resource. If you use the search function you will probably find discussions about relevant or related issues.

Good luck.
EM.
Jamesmartinn
2011-11-27 00:52
@ ElMaestro
Posting: # 7730

 Sample Size and CV for Replicate Design

Dear EM,

Thank you for your reply. I downloaded the package, and indeed I see the 2x2x4 design is supported. However, given the information I have available and my ultimate goal, I'm not sure how to specify what I need. I see targetpower = 0.80 and alpha = 0.05, as well as a few other arguments such as CV and theta. Do I have enough information in my initial post to solve the problem using PowerTOST? I should mention I've never used the CV metric before, and I don't see an argument to specify both the 0.35 and the 0.30 values. Are these somehow related to the 'theta' values that I'm also asked to specify?

Thank you so much.


Edit: Full quote removed. Please delete anything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut]
ElMaestro
Denmark,
2011-11-27 01:43
@ Jamesmartinn
Posting: # 7731

 Sample Size and CV for Replicate Design

Hi J.,

❝ I downloaded the package, and indeed I see the 2x2x4 design is supported. However, given the information I have available and my ultimate goal, I'm not sure how to specify what I need. I see targetpower = 0.80 and alpha = 0.05, as well as a few other arguments such as CV and theta. Do I have enough information in my initial post to solve the problem using PowerTOST? I should mention I've never used the CV metric before, and I don't see an argument to specify both the 0.35 and the 0.30 values. Are these somehow related to the 'theta' values that I'm also asked to specify?


OK,
in a way it sounds like you are now talking about confidence limits for the variability, not for the point estimate. If that is the case, you are entering very complicated territory where I have little to offer.
Generally, most people seek to calculate the sample size that assures a power of X, given a CV of Y and a point estimate (theta, test:reference ratio) of Z.

When I re-read your original post, it almost sounds like it is a goal in itself to keep the CV above 30%. For reference-scaling purposes? I must admit it sounds wrong to me to aim for an observed CV above a certain limit, so I have probably misunderstood something?!

Could you perhaps reformulate (I hate that word!) your assignment or put more words to it?

Many thanks.

Pass or fail!
ElMaestro
Jamesmartinn
2011-11-27 01:50
@ ElMaestro
Posting: # 7732

 Sample Size and CV for Replicate Design

Dear EM,

I'm a recent graduate of a Master's-level biostatistics program. Sadly, we didn't have a course in Phase 1 / BE-BA clinical trials, so I'm not very knowledgeable about them. In fact, over the last few days I've come to read up on them and realize it is a very niche area indeed.

I've applied for a job at a company that conducts BA-BE / Phase 1 trials. As part of the process, I was required to answer 3 questions. The first I answered by research. The second was given to me, verbatim, as follows:

The intra-subject variability, expressed as CV% (coefficient of variation) of a measured response is expected to be 35%. The intra-subject variability is defined as the variability of the response within the same subject from one occasion to another.

The intention is to design a study as a 2-way, crossover, replicate administration of the same treatment to estimate the intra-subject variability. What will be the appropriate sample size (number of subjects) in order to have at least 80% confidence that the estimated CV% is larger than 30%?


I'm starting to think that there is not enough information to answer the question, as given.

The third question (if you're curious), which I'm also stuck on, reads as follows:

The results of a clinical trial are:
Treatment     n   Mean   SD
A            31   2.52  1.36
B            30   2.13  1.30
C (placebo)  16   0.69  1.01


What was the power of the study to detect a 20% difference between treatments?


At this point, I don't think I'm knowledgeable enough to work in the position (unless they are willing to train me in it; they haven't said much in this regard, and they have an idea of my transcript/background).


Best
ElMaestro
Denmark,
2011-11-27 02:24
@ Jamesmartinn
Posting: # 7733

 Sample Size and CV for Replicate Design

Hi J,

shocking.
Anyway, I feel I can't help much.
In principle, question 2 does not necessarily mean a 4,4,2-design. It could, from the wording, be a two-period, one-treatment design, right?

Jamesmartinn
2011-11-27 02:50
@ ElMaestro
Posting: # 7734

 Sample Size and CV for Replicate Design

Hi Em,

Thank you so much for your efforts; I really appreciate it. I'm going to take it upon myself to read up on BA-BE trials, now that I have free time. Over the last few days, in trying to answer these questions, I've come across several texts. Would you mind recommending one for someone who knows absolutely nothing about the area – one that, for example, defines what Cmax and AUC are and where they come from?


Best,
Helmut
Vienna, Austria,
2011-11-27 05:47
@ Jamesmartinn
Posting: # 7735

 Strange questions…

Hi Jamesmartinn!

❝ […] I'm going to take it upon myself to read up on BA-BE trials, now that I have free time.


Does that mean you didn’t get the job? Maybe not the worst which could happen with a boss asking such questions. Let’s have a look:

❝ The intra-subject variability, expressed as CV% (coefficient of variation) of a measured response is expected to be 35%.


So far, so good. It should be noted that in a conventional 2×2×2 design (2-sequence, 2-period, 2-treatment, or TR|RT) CVintra (aka CVW) is actually pooled from CVWR and CVWT – the within-subject (=intra-subject) variabilities of the Test and Reference treatments. We need a replicate design (like the one you mentioned in your first post: TRTR|RTRT) in order to estimate them separately.

❝ The intra-subject variability is defined as the variability of the response within the same subject from one occasion to another.


Almost. Correct only if the same treatment is administered. Since in a 2×2×2 we cannot estimate CVWR and CVWT separately, in the common model we have to assume that they are identical.

❝ The intention is to design a study as a 2-way, crossover, replicate administration of the same treatment to estimate the intra-subject variability.


Oops. This disqualifies your (still potential?) boss. If we administer only the same treatment twice, it’s not a cross-over (I was trapped myself). Although scientifically valid (e.g., RR as ElMaestro assumed above) regulators do not accept such an approach as a ‘proof’ of high variability (USA & Canada: 30%, EU: >30%). The only reason we want to show high variability is that the conventional acceptance range for bioequivalence (80–125%) may be widened (or more technical: scaled to s²WR). Regulators want to see CV>30% demonstrated in the same study (Reference and Test); CV>30% in another study is irrelevant. Why plan a study which will not be accepted?

❝ What will be the appropriate sample size (number of subjects) in order to have at least 80% confidence that the estimated CV% is larger than 30%?


This is weird. Chow & Liu (Chapter 7.3.2; see below) give a method based on χ² – but only for cross-overs. If we assume CV 35% (s²=ln[0.35²+1]=0.11556) and use df=n-1 instead of n1+n2-2:
  n  ν   SSE=s²·ν  χ²0.8,ν   SSE/χ²   CLlower
————————————————————————————————————————————
 10   9  1.0400    12.242    0.08495  29.78%
 11  10  1.1556    13.442    0.08597  29.96%
 12  11  1.2711    14.631    0.08688  30.13%  ← bingo?
 13  12  1.3867    15.812    0.08770  30.28%
 14  13  1.5023    16.985    0.08845  30.41%

Take it with a pinch of salt – maybe it’s total crap!
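If you want to double-check the arithmetic without R, here is a quick sketch in Python (the χ²0.8,ν quantiles are copied from the table above rather than recomputed, so nothing beyond the standard library is needed):

```python
import math

# assumed CV of 35% -> log-scale variance s² = ln(CV² + 1)
s2 = math.log(0.35**2 + 1)  # 0.11556

# chi-squared 80% quantiles for nu = 9…13 (copied from the table above)
chi2_80 = {9: 12.242, 10: 13.442, 11: 14.631, 12: 15.812, 13: 16.985}

cl_lo = {}
for n in range(10, 15):
    nu = n - 1                        # df = n - 1
    sse = s2 * nu                     # SSE = s²·ν
    var_lo = sse / chi2_80[nu]        # lower CL of the log-scale variance
    cl_lo[n] = 100 * math.sqrt(math.exp(var_lo) - 1)  # back-transform to CV%
    print(n, round(cl_lo[n], 2))
```

Same caveat as above: it only reproduces the table's method, it doesn't validate it.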

❝ The results of a clinical trial are:

❝ […]

❝ What was the power of the study to detect a 20% difference between treatments?


What was the study’s target: A vs. placebo, B vs. placebo, A vs. B (including ‘internal validation’, i.e., superiority of A and B against placebo)? As an exercise you can get the weighted variances and calculate the pooled (=total) variance, but IMHO that’s the end of the story. For a power calculation you would need both α and Δ. BTW, retrospective (aka a posteriori, post hoc) power is futile (search the forum) and should go straight to the statistical garbage bin. :cool:

❝ Would you mind recommending one for someone who knows absolutely nothing about the area? For example, defining what Cmax and AUC are and where they come from?


See this post. I would start with Hauschke et al. (2007) and continue with the 3rd ed. of Chow & Liu (2009). For some background I recommend David Bourne’s online course on Biopharmaceutics and Pharmacokinetics.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Jamesmartinn
2011-11-27 15:43
@ Helmut
Posting: # 7736

 Strange questions…

Hi HS,

Thank you for taking the time to reply! I was given this take-home test on Wednesday of last week and asked to submit my solutions by Friday. As I stated earlier, I could only solve one of the questions and emailed them back saying I was unsuccessful in solving the remaining ones. They replied asking me to take the weekend to try once more; the interview went very well and they seemed to like me. I have to submit something tomorrow.

I'm really starting to think these questions were vague on purpose. I'm wondering if they want me to respond by saying additional information is required (for example, should alpha be 0.05 or 0.01?). Then again, I'm not sure; I'm new to BE/BA, and it seems like lots of conventions/standards (FDA, etc.) already exist, so maybe they expect me to know them. It's a real toss-up given I'm a total novice in this area!

Regarding the CV study, I'm going to try to locate the Chow and Liu source; it seems my best bet. For the last question, as you stated, I have no idea what the goal of the study was. What I posted in green text is literally what I got. Given that limited information (2 treatments, 1 placebo), what would be the most likely purpose (superiority, inferiority, equivalence)? I'm guessing whichever it is, it's in reference to the placebo group.

I'm going to work through a few scenarios for that question. However, the whole '20% difference between treatments' as an effect size really throws me off. Could you demonstrate how this is taken into account by calculating 1 or 2 different scenarios? Let's pretend I'm comparing (i) treatment A vs. placebo and (ii) treatment B vs. placebo for either inferiority or superiority.

Thank you so much, you are all very kind in being understanding to my situation. Best,
Helmut
Vienna, Austria,
2011-11-27 18:46
@ Jamesmartinn
Posting: # 7738

 Misusing PowerTOST for superiority?

Hi Jamesmartinn!

❝ […] They replied asking me to take the weekend to try once more; the interview went very well and they seemed to like me. I have to submit something tomorrow.


❝ I'm really starting to think these questions were vague on purpose. I'm wondering if they want me to respond by saying additional information is required (for example, should alpha be 0.05 or 0.01?).


Difficult to perform this kind of telediagnosis across the pond. Either they purposely asked catch questions (OK) or they think these questions make sense (bad). In the latter case, be prepared for a hard everyday work life.

❝ In regards to the CV study, I'm going to try to locate the Chow and Liu source, it seems my best bet.


Since time is running out: Check your inbox. :-D

❝ For the last question, as you stated, I have no idea what the goal of the study was. What I posted in green text is literally what I got. Given that limited information (2 treatments, 1 placebo), what would be the most likely purpose (superiority, inferiority, equivalence)? I'm guessing whichever it is, it's in reference to the placebo group.


We can only guess. The most likely scenarios IMHO are:
  • Superiority case: A vs. placebo and B vs. placebo. Target of the study is to select the ‘better’ treatment.
  • Equivalence case: A vs. B, internal validation (superiority of both to placebo)

❝ I'm going to work through a few scenarios for that question. However, the whole '20% difference between treatments' as an effect size really throws me off.


In power calculations we need Δ – the clinically relevant difference. In bioequivalence it was set in the early 1980s to 20% (by convention!). It might be narrower or wider as well: for NTIDs (narrow therapeutic index drugs) it’s 10%, and in some regulations for HVDs/HVDPs (highly variable drugs / drug products) 25% or even 30%. Since AUC and Cmax follow a lognormal distribution we apply a multiplicative model (or an additive one on logs); therefore the conventional acceptance range in log-scale is [ln(1−Δ), ln(1/(1−Δ))] = [−0.2231, +0.2231]. These limits are symmetrical around zero, and back-transformed we get 80–125%. The data of the third question don’t look like they come from a BE study. I would guess it’s a ‘classical’ clinical trial: parallel design, untransformed data.
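Just to illustrate the symmetry of those log-scale limits (plain arithmetic, nothing fancy):

```python
import math

delta = 0.20                       # conventional clinically relevant difference
lo = math.log(1 - delta)           # ln(0.80) ≈ -0.2231
hi = math.log(1 / (1 - delta))     # ln(1.25) ≈ +0.2231
print(round(lo, 4), round(hi, 4))  # symmetric around zero in log-scale
```

Back-transforming lo and hi gives the familiar 80–125% acceptance range.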

❝ Could you demonstrate how this is taken into account by calculating 1 or 2 different scenarios? Let's pretend I'm comparing (i) treatment A vs. placebo and (ii) treatment B vs. placebo for either inferiority or superiority.


Let’s see:
Treatment    n    Mean   SD    s²     CV
——————————————————————————————————————————
A            31   2.52  1.36  1.85   54.0%
B            30   2.13  1.30  1.69   61.0%
C (placebo)  16   0.69  1.01  1.02  146.4%


Pooled standard deviations are calculated according to s0=√{[(n1−1)s²1+(n2−1)s²2]/(n1+n2−2)}; we get:
Comparison   s0    Δ      CV
——————————————————————————————
A vs. C     1.25  1.83   68.5%
B vs. C     1.21  1.44   84.0%
A vs. B     1.33  0.39  341.2%
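If you want to verify the pooled SDs yourself, a quick Python sketch (means/SDs/n taken from the table in the question):

```python
import math

def pooled_sd(n1, s1, n2, s2):
    # s0 = sqrt(((n1-1)·s1² + (n2-1)·s2²) / (n1+n2-2))
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

A, B, C = (31, 1.36), (30, 1.30), (16, 1.01)  # (n, SD) from the trial
s0_AC = pooled_sd(*A, *C)
s0_BC = pooled_sd(*B, *C)
s0_AB = pooled_sd(*A, *B)
print(round(s0_AC, 2), round(s0_BC, 2), round(s0_AB, 2))
```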


In clinical trials commonly a 95% CI (as opposed to the 90% CI in BE) is calculated: CI=Δ±tα,n1+n2−2·s0·√[(n1+n2)/(n1·n2)]; we get:
Comparison   Δ   tα,n1+n2–2    95% CI
——————————————————————————————————————
A vs. C     1.83  2.0141   1.05 – 2.61
B vs. C     1.44  2.0154   0.69 – 2.19
A vs. B     0.39  2.0010  -0.29 – 1.07
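These CIs are easy to reproduce with plain arithmetic (the t quantile is copied from the table above to keep it standard-library only; A vs. C shown):

```python
import math

def ci95(diff, s0, n1, n2, t):
    # CI = diff ± t · s0 · sqrt((n1+n2)/(n1·n2))
    hw = t * s0 * math.sqrt((n1 + n2) / (n1 * n2))
    return diff - hw, diff + hw

# A vs. C: pooled s0 ≈ 1.2542, t(0.05, 45) = 2.0141 (from the table)
lo_AC, hi_AC = ci95(2.52 - 0.69, 1.2542, 31, 16, 2.0141)
print(round(lo_AC, 2), round(hi_AC, 2))
```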


Looking at a clinically relevant difference of +20% over placebo (0.83 = 0.69 × 1.2), treatment A ‘works’ (lower CL > 0.83) and B does not (lower CL < 0.83). Now for the tricky part: we must not assume equal variances – and according to the FDA’s guidance should not even test for them. The t-test is fairly robust against deviations from normality but rather sensitive to imbalance – which we have here. Therefore we should apply Satterthwaite’s approximation of the degrees of freedom and use Welch’s t-test instead (note: that’s the default in R). We get:
Comparison   Δ     df    tα,df       95% CI
—————————————————————————————————————————————
A vs. C     1.83  39.09  2.0225   1.05 – 2.61
B vs. C     1.44  37.91  2.0246   0.68 – 2.20
A vs. B     0.39  58.99  2.0010  -0.29 – 1.07

No big deal with this dataset…
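The Satterthwaite df can be reproduced with the usual Welch–Satterthwaite formula (my own arithmetic, independent of R):

```python
def welch_df(n1, s1, n2, s2):
    # Welch–Satterthwaite: (s1²/n1 + s2²/n2)² /
    #                      ((s1²/n1)²/(n1-1) + (s2²/n2)²/(n2-1))
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

df_AC = welch_df(31, 1.36, 16, 1.01)
df_BC = welch_df(30, 1.30, 16, 1.01)
df_AB = welch_df(31, 1.36, 30, 1.30)
print(round(df_AC, 2), round(df_BC, 2), round(df_AB, 2))
```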

Time to fire up R:
require(PowerTOST)
power.RatioF(alpha = 0.025, theta1 = 0.8, theta0 = 2.52/2.13,
             CV = 3.412, n = 31+30, design = "parallel")
[1] 4.896124e-05

Not surprisingly power of the equivalence test A–B is terribly low (small difference, high CV). No idea how to tweak PowerTOST for the superiority cases. Maybe later on (can’t promise). I have a deadline approaching at 24:00 CET. :-(


Tried A > C:
power.RatioF(alpha = 0.025, theta1 = 1.2, theta2 = Inf,
             theta0 = 2.52/0.69, CV = 0.685, n = 31+16,
             design = "parallel")

… and got:
Error in checkmvArgs(lower = lower, upper = upper, mean = delta, corr = corr,  :  mean contains NA

According to help(pmvt):

Note that both -Inf and +Inf may be specified in the lower and upper integral limits in order to compute one-sided probabilities.


PowerTOST goes berserk in .power.RatioF attempting to calculate the correlation matrix – which ends up in Inf/Inf and NaN.

Detlew? :confused: Too stupid to set up a workaround.

Using the sledgehammer approach (starting with the upper limit set to theta0):
power.RatioF(alpha = 0.025, theta1 = 1.2, theta2 = 2.52/0.69,
             theta0 = 2.52/0.69, CV = 0.685, n = 31+16,
             design = "parallel")
[1] 0.025

Good. That’s α. Now let’s increase theta0 (since Inf doesn’t work):
theta0    power
  4      0.08441706
  5      0.4485689
  6      0.7621197
  8      0.9619132
 10      0.9920993


Maybe you can test your diplomatic skills (something I completely lack) and answer with something like:

Treatment A showed superiority to placebo (defined as a 20% increase): +1.83 (95% CI +1.05 ~ +2.61) since the lower CL>0.83. Treatment B showed +1.44 (+0.68 ~ +2.20). However, equivalence between A and B could not be excluded (+0.39, -0.29 ~ +1.07) since the CI includes zero.

Now for the meaningless part:

Power to show equivalence between A and B was only 4.9·10⁻⁵; XYZ (include the numbers from sampleN.RatioF here) subjects would be needed in a future study. Assuming a clinically relevant difference of 20%, power to show superiority of both treatments to placebo was very high. :-D


ElMaestro
Denmark,
2011-11-27 16:14
@ Helmut
Posting: # 7737

 Strange questions…

Hi HS,


❝ Take it with a pinch of salt – maybe it’s total crap!


That table doesn't seem intuitive to me. The variability estimate should generally go down with increasing sample size, shouldn't it? Perhaps I misunderstood some of your underlying ideas here; I don't have the book.

Helmut
Vienna, Austria,
2011-11-27 19:07
@ ElMaestro
Posting: # 7739

 Strange questions…

Dear ElMaestro!

❝ That table doesn't seem intuitive to me. The variability estimate should generally go down with increasing sample size, shouldn't it? Perhaps I misunderstood some of your underlying ideas here;


That’s the lower CL of the assumed CV of 35% – going up with increasing n. Or:
    n   CLlo    CLhi
—————————————————————
   10  29.78%  46.18%
   12  30.13%  44.66%
   14  30.41%  43.59%
   20  31.02%  41.67%
   50  32.29%  38.72%
  100  33.01%  37.49%
 1000  34.33%  35.72%
10000  34.78%  35.22%

Haven’t tried . :-D


P.S.: You have mail.

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz