Shuanghe ★★ Spain, 20150925 16:48 (edited by Shuanghe on 20150925 17:16) Posting: # 15471 Views: 20,975 

Dear forum members, The recent discussion on the new FDA guidances for dabigatran and rivaroxaban led me to the statistical method mentioned in the warfarin Na guidance. It seems the method is almost the same as the one for HVDPs in the progesterone guidance, with a different regulatory constant and criteria of course. With the SAS code example there, it's not difficult to calculate the 95% upper limit for the BE evaluation. However, there's no SAS code for the variability comparison, so here are my questions. The 90% CI for σ_{WT}/σ_{WR} was expressed as {(s_{WT}/s_{WR}) / SQRT(F(0.05, ν_{T}, ν_{R})), (s_{WT}/s_{WR}) / SQRT(F(0.95, ν_{T}, ν_{R}))} where,
OK, that's for now. I wish you all a wonderful weekend! — All the best, Shuanghe 
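The interval quoted above can be sketched numerically. The following is a minimal illustration in Python (the thread itself uses R and SAS); the within-subject SDs are hypothetical, and the two F quantiles are the df1 = df2 = 12 values that appear later in the thread, quoted with the guidance's "probability to its right" convention:

```python
import math

# Hypothetical within-subject SDs (not from the guidance or any study),
# with equal degrees of freedom nu_T = nu_R = 12:
s_wt, s_wr = 0.10, 0.12
ratio = s_wt / s_wr

# F quantiles for df1 = df2 = 12, upper-tail convention
# (match R's qf(..., lower.tail = FALSE), as given later in the thread):
F_upper_tail_05 = 2.686637   # 5% of the distribution to the right
F_upper_tail_95 = 0.372213   # 95% to the right (= 1/2.686637 for equal dfs)

lower = ratio / math.sqrt(F_upper_tail_05)
upper = ratio / math.sqrt(F_upper_tail_95)
print(f"90% CI for sigma_WT/sigma_WR: ({lower:.4f}, {upper:.4f})")
```

Note that dividing by the *larger* F value yields the *lower* limit, which is the source of the "reversed limits" confusion discussed below.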
jag009 ★★★ NJ, 20150925 20:42 @ Shuanghe Posting: # 15473 Views: 18,257 

Hi, » 4. Since FDA requests that only subjects who complete all 4 periods should be included in the analysis, for obtaining the F-distribution value, it seems the degrees of freedom ν_{T} and ν_{R} will always be equal, which is always N_{total} − 2 for a full replicate. Right? Where did you see this? I don't see it in the guidance. However, if you use FDA's SAS code (i.e., progesterone) to extract the variances etc., you WILL see that the code only takes subjects who completed both periods for Test and Ref. For example, any subject who finished only 1 T out of 2 Ts will be dropped. Take a look at previous threads involving the Concerta guidance (I started that thread and a few joined in. Sorry I don't have time to set the link now. Need to go party). Lastly, as to why you get reversed upper and lower limits with the interval, I will try and goof around in SAS next week and see. John Edit: The threads John mentioned above: #13991, #14150. [Helmut] 
Shuanghe ★★ Spain, 20150928 15:05 @ jag009 Posting: # 15478 Views: 17,965 

Hi John, » Where did you see this? I don't see it in the guidance. However, if you use FDA's SAS code (i.e., progesterone) to extract the variances etc., you WILL see that the code only takes subjects who completed both periods for Test and Ref. For example, any subject who finished only 1 T out of 2 Ts will be dropped. That's what I meant. By FDA's code, all subjects have to complete all 4 periods, so in practice the number of subjects having T data is always equal to the number of subjects having R data. By the way, I asked FDA about the possibility of modifying the code to include subjects with 2 R and 1 T and, for "ilat", the mean difference between T and R, using the modified version: ilat = lat1t − 0.5*(lat1r + lat2r), where lat1t could be replaced by lat2t depending on the period of dropout. It should be easy to implement with IF/THEN/ELSE. But they confirmed one and a half years later that subjects should have data for all 4 periods and rejected the suggestion. » Take a look at previous threads involving the Concerta guidance (I started that thread and a few joined in. Sorry I don't have time to set the link now. Need to go party). » Edit: The threads John mentioned above: #13991, #14150. [Helmut] Woo, long posts! I'll definitely take a look at them later. Thanks. — All the best, Shuanghe 
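Shuanghe's (FDA-rejected) modification above can be sketched as a small branching function. This is only an illustration in Python, not the guidance's SAS code; the variable names follow the SAS convention (lat1t/lat2t = log PK under Test in the subject's 1st/2nd Test period, lat1r/lat2r likewise for Reference), and the interface is made up:

```python
# Sketch of the proposed (and rejected) modified T-R contrast "ilat" for a
# full-replicate subject who may be missing one Test period. None marks a
# missing observation.
def ilat(lat1t, lat2t, lat1r, lat2r):
    if lat1t is not None and lat2t is not None:
        # complete subject: usual mean T minus mean R
        return 0.5 * (lat1t + lat2t) - 0.5 * (lat1r + lat2r)
    elif lat1t is not None:        # second Test period missing
        return lat1t - 0.5 * (lat1r + lat2r)
    elif lat2t is not None:        # first Test period missing
        return lat2t - 0.5 * (lat1r + lat2r)
    return None                    # no Test data: subject cannot contribute
```

For example, a subject with both Test periods and log values (4.0, 4.2) vs. Reference (3.8, 3.9) contributes 0.25; a subject missing the second Test period contributes 4.0 − 3.85 = 0.15.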
d_labes ★★★ Berlin, Germany, 20150930 08:52 @ Shuanghe Posting: # 15497 Views: 17,410 

Dear Shuanghe! » ... By FDA's code, all subjects have to complete all 4 periods, so in practice the number of subjects having T data is always equal to the number of subjects having R data. » » By the way, I asked FDA about the possibility of modifying the code to include subjects with 2 R and 1 T and, for "ilat", the mean difference between T and R, using the modified version: » ilat = lat1t − 0.5*(lat1r + lat2r), » where lat1t could be replaced by lat2t depending on the period of dropout. It should be easy to implement with IF/THEN/ELSE. But they confirmed one and a half years later that subjects should have data for all 4 periods and rejected the suggestion. What you describe is the part of the SAS code concerning the (µ_{T} − µ_{R})² term of the linearized scABE criterion (ilat analysis). For the part dealing with the intra-subject variability, the progesterone / warfarin guidance SAS code (dlat analysis) does not automatically drop subjects having missings under the Test treatment. What to do here? Retain subjects having 2 R but only 1 T or 0 T? Or also drop them, as we have done in the R code in the post below? — Regards, Detlew 
Shuanghe ★★ Spain, 20150930 13:34 @ d_labes Posting: # 15502 Views: 17,341 

Dear Detlew, » What you describe is the part of the SAS code concerning the (µ_{T} − µ_{R})² term of the linearized scABE criterion (ilat analysis). You are right. » For the part dealing with the intra-subject variability, the progesterone / warfarin guidance SAS code (dlat analysis) does not automatically drop subjects having missings under the Test treatment. Yes. And I tried both with some of my studies. The difference is small, but in a borderline case it could mean passing or failing BE. » What to do here? Retain subjects having 2 R but only 1 T or 0 T? Or also drop them, as we have done in the R code in the post below? In addition to the scenarios you mentioned, what about subjects with 0 R + 2 T? Theoretically they can provide information on the ISCV of T. To make a list of what cannot be included is much easier,
So, for the moment I prefer using subjects with all 4 periods, just to be safe (provided it is properly described in the protocol). Otherwise the situation would be more complicated. — All the best, Shuanghe 
Helmut ★★★ Vienna, Austria, 20150927 12:23 @ Shuanghe Posting: # 15474 Views: 18,302 

Hi Shuanghe, congratulations on mastering the nested lists in your post. I'm impressed! » 9. It seems LaTeX expressions cannot be used to write math equations here. Correct. » Is it possible to implement it? Theoretically yes.
alt="[image]" is displayed. GeSHi would come up with alt="\left ( \frac{s_{wR} / s_{wT}}{\sqrt{F_{\alpha/2},\quad{\nu_1},\quad{\nu_2}}}, \frac{s_{wR} / s_{wT}}{\sqrt{F_{1-\alpha/2},\quad{\nu_1},\quad{\nu_2}}} \right )". The former is non-informative and the latter confusing to anybody not familiar with the syntax of AMS-LaTeX. BTW, simple formulas in the forum can be constructed by means of UTF-8 characters and BBCodes, e.g., CV_{w} = √(ℯ^{σ²_{w}} − 1). » Since there are so many members here … Don't overestimate that. This year so far only 122 of them were active (at least one post). The forum is a rather exclusive club; ten nerds (0.1%) are guilty of 55% of all posts. » … and sometimes one has to write a complicated formula, so it might be a good idea to be able to write equations in LaTeX. I'm afraid users in the forum knowledgeable of LaTeX are in the minority… BTW, I once wrote a manuscript in MiKTeX according to the publisher's templates, only to learn that they required bloody M$ Word in the meantime. Although I used MathType, don't ask me how the paper looked in the end. Edit: With the MathJax library (installed June 2019): $$\left(\frac{s_{wR}/s_{wT}}{\sqrt{F_{\alpha/2},\quad{\nu_1},\quad{\nu_2}}},\frac{s_{wR}/s_{wT}}{\sqrt{F_{1-\alpha/2},\quad{\nu_1},\quad{\nu_2}}}\right)$$ $$CV_w=\sqrt{e^{\sigma_w^2}-1}$$ — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Shuanghe ★★ Spain, 20150928 15:32 @ Helmut Posting: # 15479 Views: 17,844 

Hi Helmut, » congratulations on mastering the nested lists in your post. I'm impressed! » Theoretically yes.
That's far more complicated than I thought. I'll just upload a PNG if it's necessary in the future. » I'm afraid users in the forum knowledgeable of LaTeX are in the minority… BTW, I once wrote a manuscript in MiKTeX according to the publisher's templates, only to learn that they required bloody M$ Word in the meantime. If they need M$ Word, why bother providing a LaTeX template? No, the correct question is: since they have a LaTeX template, why bother requiring M$ Word at all? TeX produces documents of much better quality. » Although I used MathType, don't ask me how the paper looked in the end. By the way, I have TeX Live installed and use TeXstudio as the editor. It works great. I once sent a 12-page PDF answering a regulatory question about the multivariate statistical distance method for dissolution comparison, written with the tufte-handout class with many side-note illustrations. A very "professional printout". TeX rocks! — All the best, Shuanghe 
Helmut ★★★ Vienna, Austria, 20150928 15:46 @ Shuanghe Posting: # 15481 Views: 17,860 

Hi Shuanghe, » […] I'll just upload a PNG if it's necessary in the future. It is possible to get a PNG from one of Google's tools. Paste the TeX code after https://chart.apis.google.com/chart?cht=tx&chl= Open this link in a new tab: Not perfect, but OK. It's a 24-bit PNG. » » […] I once wrote a manuscript in MiKTeX according to the publisher's templates, only to learn that they required bloody M$ Word in the meantime. » If they need M$ Word, why bother providing a LaTeX template? » No, the correct question is: since they have a LaTeX template, why bother requiring M$ Word at all? You missed the phrase "in the meantime". Of course they removed the TeX stuff from their website and provided M$ DOTs instead. Why? Compare the user base of M$ Word with the one of all flavors of TeX… » TeX produces documents of much better quality. Absolutely. » […] I have TeX Live installed and use TeXstudio as the editor. I will give it a try! — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
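The trick Helmut describes amounts to URL-encoding the TeX source into the `chl` query parameter. A small sketch in Python (the chart service has since been retired by Google, so treat the URL as historical; the TeX string is just an example):

```python
from urllib.parse import quote

# Build the historical Google chart URL by percent-encoding the TeX code
# into the "chl" parameter. Backslashes, braces etc. must be escaped.
base = "https://chart.apis.google.com/chart?cht=tx&chl="
tex = r"CV_w=\sqrt{e^{\sigma_w^2}-1}"
url = base + quote(tex)
print(url)
```

Opening such a URL in a browser (while the service existed) returned a rendered PNG of the formula.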
d_labes ★★★ Berlin, Germany, 20150928 07:45 @ Shuanghe Posting: # 15475 Views: 17,971 

Dear Shuanghe, » However, there's no SAS code for the variability comparison ... Wow! Seems correct. I was not aware of this fact. At the moment I don't have answers to all of your other questions, but only to your points 1., 2. and 4. Let's start with 2. and 4., the simpler ones: » 2. From the SAS code for s_{WR} (from D_{ij} = R_{1} − R_{2}, then running PROC MIXED, outputting CovParms, and calculating s_{WR} = SQRT(estimate/2)), is it correct to assume that we need to modify the code to have something like D_{ij}T = T_{1} − T_{2}, then run the same procedure as for s_{WR} to calculate s_{WT}? Correct. » 4. Since FDA requests that only subjects who complete all 4 periods should be included in the analysis, for obtaining the F-distribution value, it seems the degrees of freedom ν_{T} and ν_{R} will always be equal, which is always N_{total} − 2 for a full replicate. Right? Correct for a full replicate with 4 periods and 2 sequences, with the mentioned precondition. Now to the mysterious lower/upper (upper/lower?) CI of the variabilities (your point 1.): The answer lies hidden in the text of the warfarin guidance. "... F_{α/2,ν1,ν2} is the value of the F-distribution with ν_{1} (numerator) and ν_{2} (denominator) degrees of freedom that has probability of α/2 to its right." (Emphasis by me. Similar text for F_{1−α/2,ν1,ν2}.) Example in R speak with df1 = 12, df2 = 12, alpha/2 = 0.05: F1 <- qf(0.05, df1=12, df2=12, lower.tail=FALSE) gives F1 = 2.686637. This is the same as F1 <- qf(1-0.05, df1=12, df2=12, lower.tail=TRUE). Usually (and in SAS) the probability to the left is given back (the integral over the F-density from 0 to F). F2 <- qf(0.05, df1=12, df2=12) # lower.tail=TRUE is the default gives F2 = 0.3722125. SAS: Data Finv; gives also F1 = 2.686637, F2 = 0.3722125. Hope this helps. — Regards, Detlew 
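Detlew's two quantiles can be cross-checked without any stats library: for equal numerator and denominator degrees of freedom the F distribution satisfies F(1−a, ν, ν) = 1 / F(a, ν, ν). A quick sanity check in Python (the two values are taken verbatim from the post above):

```python
# Upper-tail quantiles for df1 = df2 = 12, from the R/SAS output above:
F1 = 2.686637    # qf(0.05, 12, 12, lower.tail = FALSE)
F2 = 0.3722125   # qf(0.95, 12, 12, lower.tail = FALSE)

# For equal dfs the F distribution is "reciprocal-symmetric":
# F(1-a, v, v) = 1 / F(a, v, v)
recip = 1.0 / F1
print("1/F1 =", round(recip, 7), " F2 =", F2)
```

This also explains why the lower and upper limits of the σ-ratio CI appear "reversed": the larger quantile produces the smaller limit.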
Shuanghe ★★ Spain, 20150928 15:39 @ d_labes Posting: # 15480 Views: 17,826 

Dear Detlew, » Let's start with 2. and 4., the simpler ones: » Correct for a full replicate with 4 periods and 2 sequences, with the mentioned precondition. Thanks for the confirmation. » Now to the mysterious lower/upper (upper/lower?) CI of the variabilities (your point 1.): » The answer lies hidden in the text of the warfarin guidance. » "... F_{α/2,ν1,ν2} is the value of the F-distribution with ν_{1} (numerator) and ν_{2} (denominator) degrees of freedom that has probability of α/2 to its right." (Emphasis by me. Similar text for F_{1−α/2,ν1,ν2}.) Thanks!!! Didn't notice that. » Hope this helps. Yes! — All the best, Shuanghe 
Shuanghe ★★ Spain, 20150928 16:57 @ Shuanghe Posting: # 15482 Views: 18,199 

Dear all, When I said "it seems the analysis is doable in R", I made at least 2 mistakes.
# read data from website, variable names: subj, per, seq, treat, pk, logpk So to summarise, I can get s_{WR}, s_{WT} and the 90% CI for the variability comparison (thanks to Detlew), but not the 95% upper limit for (Y_{T} − Y_{R})² − θ·s²_{WR}. I'm sure the error comes from the means of T and R. Any thoughts? — All the best, Shuanghe 
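For reference, the 95% upper limit Shuanghe is chasing is, as far as I can reconstruct from the progesterone/warfarin SAS code, obtained by Howe's approximation. Below is a sketch in Python with entirely hypothetical inputs (point estimate, SE, s²WR and dfs are made up; the t and χ² quantiles are hard-coded standard table values to keep this stdlib-only), so treat it as an illustration of the combination step, not the guidance code itself:

```python
import math

# Hypothetical inputs (not from any real study): point estimate and SE of
# the T-R difference on the log scale, its df, and s2wr from the dlat step.
pe, se, df = 0.05, 0.04, 10
s2wr, dfd = 0.01, 10
theta = (math.log(1 / 0.9) / 0.10) ** 2   # regulatory constant for NTIDs

# Hard-coded table quantiles (assumed; normally from qt()/qchisq()):
t95 = 1.8125      # t quantile, 0.95 one-sided, 10 df
chi95 = 18.307    # chi-square quantile, 0.95 left tail, 10 df

# Howe's approximation of the 95% upper bound of (muT-muR)^2 - theta*sigma2WR,
# as I read the guidance's SAS code:
x = pe ** 2 - se ** 2                      # estimate of (muT-muR)^2
boundx = (abs(pe) + t95 * se) ** 2         # upper bound of the squared difference
y = -theta * s2wr
boundy = y * dfd / chi95                   # upper bound of -theta*sigma2WR
critbound = (x + y) + math.sqrt((boundx - x) ** 2 + (boundy - y) ** 2)
print(f"critbound = {critbound:.6f}  ({'pass' if critbound <= 0 else 'fail'})")
```

With these made-up numbers the bound is slightly above zero, i.e., the scaled criterion would fail.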
Helmut ★★★ Vienna, Austria, 20150929 00:22 @ Shuanghe Posting: # 15489 Views: 17,781 

Hi Shuanghe, » # remove subjects who have incomplete periods. More elegant solution? » library(dplyr) Didn't know this package. The syntax within some functions (i.e., the %>% operator) is – well – unconventional. Code to keep only subjects completing all four periods:
Which subjects are incomplete? print(sort(names(incomp))) gives
summary(d[2:4]) gives
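The filtering idea in Helmut's snippet (keep only subjects completing all four periods) can also be sketched language-agnostically. A minimal Python illustration with a toy, made-up record layout of (subject, period, treatment) tuples:

```python
from collections import Counter

# Toy records (subject, period, treatment); the layout is hypothetical,
# just to illustrate the "keep only 4-period completers" filter.
records = [
    (1, 1, "T"), (1, 2, "R"), (1, 3, "T"), (1, 4, "R"),
    (2, 1, "R"), (2, 2, "T"), (2, 3, "R"),              # subject 2: 3 periods
    (3, 1, "R"), (3, 2, "T"), (3, 3, "R"), (3, 4, "T"),
]

# Count periods per subject and keep only completers.
n_periods = Counter(subj for subj, _, _ in records)
completers = {s for s, n in n_periods.items() if n == 4}
kept = [r for r in records if r[0] in completers]
print(sorted(completers))   # subject 2 is dropped
```

The same logic underlies both the dplyr and the base-R versions discussed in this thread.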
Edit: Once I saw Detlew’s code below, I can only say about my snippet: Forget it! — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Shuanghe ★★ Spain, 20150929 16:44 @ Helmut Posting: # 15494 Views: 17,535 

Hi Helmut, » Didn't know this package. The syntax within some functions (i.e., the %>% operator) is – well – unconventional. The package was widely talked about recently on various forums and is supposedly very powerful. Well, for those who have mastered it, I guess, which I'm obviously not. » Edit: Once I saw Detlew's code below, I can only say about my snippet: Forget it! For a beginner like me, I can definitely learn from both. — All the best, Shuanghe 
d_labes ★★★ Berlin, Germany, 20150930 06:44 @ Shuanghe Posting: # 15496 Views: 17,544 

Dear Shuanghe, dear Helmut, » » Didn't know this package. The syntax within some functions (i.e., the %>% operator) is – well – unconventional. » » The package was widely talked about recently on various forums and is supposedly very powerful. Well, for those who have mastered it, I guess, which I'm obviously not. There is much hype within the R user community around Hadley Wickham (he even has an entry in Wikipedia) and the wealth of packages – named the "Hadleyverse" – he has written, among them prominently plyr / dplyr. Hadley has got headlines like "The man who revolutionized R" and has fans all over the world. I myself don't understand this hype. I find his packages, at least in part, not so easy to use or understand. What I really love is devtools for package development (an integral part of RStudio, the IDE which no R user/developer should miss) and his book "Advanced R". — Regards, Detlew 
d_labes ★★★ Berlin, Germany, 20150929 12:26 @ Shuanghe Posting: # 15490 Views: 18,109 

Dear Shuanghe, dear all, here my two cents (only base R, the code mostly comments): # Get the data according to Shuanghe Encapsulating this in a function, adding the BE decision (critbound ≤ 0 and sRatioCI[2] ≤ 2.5 in the case of NTIDs) and output (printing) of details for use in a statistical report is left as your homework. BTW: EMA data set I is not an NTID with low variability but an HVD (CVwR = se2CV(swR)). — Regards, Detlew 
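The "homework" decision Detlew describes combines the two scaled criteria mentioned in his post. A hedged sketch in Python (function name and inputs are made up; note that the FDA additionally requires passing unscaled ABE within 80.00–125.00%, which is omitted here):

```python
# Sketch of the NTID decision left as homework above: pass the scaled
# criteria when the 95% upper bound of the linearized criterion is <= 0
# AND the upper limit of the 90% CI for sWT/sWR is <= 2.5.
def ntid_decision(critbound, s_ratio_ci):
    lower, upper = s_ratio_ci
    return critbound <= 0 and upper <= 2.5

print(ntid_decision(-0.001, (0.51, 1.37)))   # True
print(ntid_decision(0.005, (0.51, 1.37)))    # False
```

Either criterion failing on its own is enough to fail the study, which is why both bounds have to be reported.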
Helmut ★★★ Vienna, Austria, 20150929 14:31 @ d_labes Posting: # 15492 Views: 17,656 

Dear Detlew, Chapeau! I love the way you recode the data set (especially how you introduced the replicates & NAs for later use). I wasn’t aware of reshape() … Amazing coding skills!— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Shuanghe ★★ Spain, 20150929 16:37 @ d_labes Posting: # 15493 Views: 17,557 

Dear Detlew, Master!!! » ... (only base R, Even better! I remember that there was a presentation or document about R for regulated use (or called something similar, I can't remember; I'll search for it later) indicating that FDA was using R for analysis and recommended (?) only base R. » Encapsulating this in a function, adding the BE decision (critbound ≤ 0 and sRatioCI[2] ≤ 2.5 in the case of NTIDs) and output (printing) of details for use in a statistical report is left as your homework. I'll try. » BTW: EMA data set I is not an NTID with low variability but an HVD (CVwR = se2CV(swR)). I know. But I don't have any project with an NTID (one of the reasons I waited until now to read the warfarin guidance in detail), so for coding and validating purposes the EMA data should be OK. — All the best, Shuanghe 
d_labes ★★★ Berlin, Germany, 20150929 19:55 @ Shuanghe Posting: # 15495 Views: 17,461 

Gentlemen (and Ladies, to be politically correct!), don't praise me to the skies (German: "Über den grünen Klee loben"). Not so simple, but also not so complicated. I only tried to reinvent the SAS code in R, one to one. — Regards, Detlew 
gvk ☆ India, 20190521 09:36 @ d_labes Posting: # 20287 Views: 3,122 

» # Get the data according to Shuanghe » emadata <- "https://dl.dropboxusercontent.com/u/7685360/permlink/emabe4p.csv" Dear Detlew, Can you share the .CSV raw data file used in the program? I am not able to download it from the link used in the program. Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! Subject line changed; see also this post #2. [Helmut] 
M.tareq ☆ 20170414 13:04 @ Shuanghe Posting: # 17239 Views: 12,820 

Dear all, a notation clarification for a newbie, using EMA's sample data set I (full replicate) after removing the subjects that didn't complete all periods. Regarding this post: ddubins' post on the calculation of CV_intra in a full replicate design using the FDA SAS code provided in Appendix E of the FDA guidance. According to ddubins' entry about the Covariance Parameter Estimates, the Residual (Subject, Treatment A) of 0.124 is σ²_{WT}, "the within-subject standard deviation", for the Test product. However, when using the code given in the FDA guidance for warfarin sodium, and following the same idea here to calculate the 90% confidence interval of the ratio of the within-subject standard deviations of the Test to the Reference product (σ_{WT}/σ_{WR}) by using the residual terms in the Covariance Parameter Estimates to calculate the upper bound, and calculating s_{WR} for the critical bound, the numbers aren't the same: there is a slight difference between the Covariance Parameter Estimates and the s_{WR} obtained from the PROC MIXED step (same idea as Shuanghe: s2wr, theta, y, boundy, sWR, critbound). My first question is (and sorry for the silly long post): the residual terms in the Covariance Parameter Estimates are estimates of the variance for both Test and Reference, and not the standard deviation as ddubins said? And that means the calculation would be sqrt(Residual, Subject, Formula R), and the same for Test, to obtain the variability ratio? 2nd question: regarding Helmut's lecture about reference-scaled average bioequivalence (part II: NTIDs), RSABE for NTIDs, using the data set provided (the CNS drug data set): when using the FDA code for NTIDs mentioned above in the warfarin guidance I get the following values from the Covariance Parameter Estimates: CVwT = SQRT(EXP(0.003281) − 1) → 5.73%, CL: 93.90% – 103.35%. When using these estimates to obtain the upper bound for the variability comparison I get 0.60080159. The number checks out with what's in your lecture regarding the EMA values, but why is there a difference from the FDA values?
Thanks in advance and best regards 
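On the first question: if the Covariance Parameter "Estimate" is a variance on the log scale (which the 5.73% figure quoted above is consistent with), the within-subject SD and CV follow directly. A small check in Python, using the value 0.003281 from the post:

```python
import math

# Assume the CovParm "Estimate" is a within-subject *variance* on the
# log scale (sigma2 = 0.003281, the value quoted above). Then:
sigma2 = 0.003281
sd = math.sqrt(sigma2)                    # within-subject SD
cv = math.sqrt(math.exp(sigma2) - 1)      # within-subject CV (log-normal)
print(f"sWT = {sd:.4f}, CVwT = {cv:.2%}")
```

That the CV works out to 5.73% only when the square root is taken supports reading the residual term as a variance, not a standard deviation.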
M.tareq ☆ 20170414 23:01 @ M.tareq Posting: # 17241 Views: 12,714 

After watching Jennifer Lawrence in Passengers I got it: the reason for the values differing from Helmut's lecture. Thanks, Miss Jennifer, and sorry for the long post, but can anyone please explain the difference between the estimate values and the ones obtained from the calculation using PROC MIXED? 