Silva ☆ Portugal, 2014-05-19 12:57 Posting: # 12967 |
|
Dear Helmut,

For a regular 2×2×2 study performed with two dosing-day groups, I've written the following code in SAS: proc glm data=...;

In the output for the group effect, the degrees of freedom were calculated as 0 and both the type I and type III SS as 0.00000000. For the formulation*group effect, the DF were computed as 1 and the p-value as 0.7228 (for both type I and type III SS).

I've also used Phoenix with the fixed effects sequence + period + formulation + group + formulation*group and subject(sequence) as a random effect, and in both the sequential and the partial tests I obtained the same p-value of 0.7840 for the group effect and 0.7228 for the formulation*group interaction.

Am I doing something wrong in SAS? Can you comment, please? I'm a real rookie with this software!

Best regards

Edit: Category changed. [Helmut] |
ElMaestro ★★★ Denmark, 2014-05-19 13:51 @ Silva Posting: # 12968 |
|
Hello Silva,

❝ For a regular 2×2×2 study performed with two dosing-day groups, I've written the following code in SAS:

❝ In the output for the group effect, the degrees of freedom were calculated as 0 and both the type I and type III SS as 0.00000000. For the formulation*group effect, the DF were computed as 1 and the p-value as 0.7228 (for both type I and type III SS).

❝ Am I doing something wrong in SAS? Can you comment, please? I'm a real rookie with this software!

It may very well be correct. In your setup subject is nested in sequence, which in turn is nested in group. So you could instead speak of "Subject(Sequence × Group)", etc. The SS = 0.0000000 arises because no additional columns for the group effect (a between-subject factor) can be forced into the design matrix.

I have no idea how to interpret or use Treatment × Group if that combination had turned out significant. If you have some thoughts on that, I'd be happy to hear them.

— Pass or fail! ElMaestro |
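ElMaestro's aliasing argument can be checked numerically. The sketch below (Python/NumPy, a toy layout of my own, not code from the thread) builds indicator columns for four subjects nested in sequence-within-group and shows that appending a group column does not increase the rank of the design matrix, which is exactly why SAS reports 0 df and SS = 0 for group:

```python
import numpy as np

# Toy 2x2x2 crossover: 4 subjects, 2 observations (periods) each.
# Subjects 0,1 belong to group 0; subjects 2,3 to group 1.
subjects = [0, 0, 1, 1, 2, 2, 3, 3]   # subject index per observation
groups   = [0, 0, 0, 0, 1, 1, 1, 1]   # group membership per observation

# Indicator (dummy) columns for the subjects
S = np.zeros((8, 4))
for row, s in enumerate(subjects):
    S[row, s] = 1.0

# Indicator column for group membership
g = np.array(groups, dtype=float).reshape(-1, 1)

rank_without = np.linalg.matrix_rank(S)
rank_with    = np.linalg.matrix_rank(np.hstack([S, g]))

# The group column is the sum of the subject columns of group-1 subjects,
# so it is linearly dependent on them and adds no rank (hence 0 df).
print(rank_without, rank_with)   # 4 4
```

The same logic holds for any between-subject factor once subject indicators are in the model: its column space is already spanned.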
Silva ☆ Portugal, 2014-05-22 14:01 @ ElMaestro Posting: # 12985 |
|
Hello ElMaestro,

Many thanks for your input. I will think about the practical significance of the treatment-by-group interaction and share my thoughts. Despite dealing quite well with Phoenix, I really have difficulty working in SAS. As a reference I use the code Helmut published in the white paper comparing the Phoenix and SAS procedures according to the EMA Q&A position.

The other dataset I've been trying to evaluate in SAS with a group effect gave me another phenomenon: SAS code proc glm data=work._test2;

This time the output does not show the LS means for R and T, nor the respective 90% confidence limits. However, in the Least Squares Means for Effect table, the difference between means and the 90% confidence limits for LSMean(i)-LSMean(j) were calculated:

Difference Between Means = 0.046662
LSMean(i)-LSMean(j): -0.026738 to 0.120061

Am I doing something wrong again? The Phoenix output is:

Least squares mean for Reference = 4.60219 (SE = 0.0737145)
Least squares mean for Test = 4.64885 (SE = 0.0737145)
Difference Between Means = -0.0466616
Lower CI = -0.1201, Upper CI = 0.02674

The difference between means is positive in SAS and negative in Phoenix, and the confidence limits are the opposite. As the approved statistical analysis plan states that this SAS PROC model will be used, how can I solve this problem?

I thank you all deeply in advance for the help and input.

Regards, Silva

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut] |
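The two outputs above are in fact the same result: one program reports the difference (and its CI) as T - R, the other as R - T. Negating the point estimate and negating-and-swapping the confidence limits maps one onto the other, as this small check with the numbers from the post shows (Python):

```python
# Log-scale LS means as reported in the Phoenix output of the post
lsm_R, lsm_T = 4.60219, 4.64885

sas_diff = lsm_T - lsm_R   # ~ +0.04666, matching the SAS sign
phx_diff = lsm_R - lsm_T   # ~ -0.04666, matching the Phoenix sign

# Phoenix 90% CI for (R - T); negate and swap the limits -> CI for (T - R)
phx_lo, phx_hi = -0.1201, 0.02674
sas_lo, sas_hi = -phx_hi, -phx_lo

# Reproduces the SAS limits (-0.026738, 0.120061) up to rounding
print(round(sas_lo, 4), round(sas_hi, 4))   # -0.0267 0.1201
```

After back-transforming with exp(), both outputs give the identical T/R point estimate and 90% CI; only the labelling of the contrast differs, so nothing in the SAP is violated.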
d_labes ★★★ Berlin, Germany, 2014-05-22 16:11 @ Silva Posting: # 12988 |
|
Dear Silva,

Using your code:

model lnCmax = group sequence form subject(group*sequence) period(group)

it is not quite clear which denominator has to be chosen for each of these terms. Hope this helps.

@Helmut: This thread has nothing to do with Regulatives/Guidelines, I think. You may consider shifting it. Done. [Helmut]

— Regards, Detlew |
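The denominator question can be illustrated with a sketch of the usual convention for the group model: between-subject terms (group, sequence) are tested against MS subject(group*sequence), within-subject terms against the residual MS. This is my own illustration, not d_labes' code, and the mean squares below are purely hypothetical (Python):

```python
# Sketch of the denominator bookkeeping for the group model.
def f_ratios(ms):
    """ms: dict of mean squares keyed by model term."""
    between_denom = ms["subject(group*sequence)"]   # for between-subject terms
    within_denom  = ms["residual"]                  # for within-subject terms
    return {
        "group":             ms["group"] / between_denom,
        "sequence":          ms["sequence"] / between_denom,
        "period(group)":     ms["period(group)"] / within_denom,
        "formulation":       ms["formulation"] / within_denom,
        "formulation*group": ms["formulation*group"] / within_denom,
    }

# Hypothetical mean squares, for illustration only
example = {
    "group": 0.50, "sequence": 0.30, "subject(group*sequence)": 0.25,
    "period(group)": 0.10, "formulation": 0.08,
    "formulation*group": 0.02, "residual": 0.04,
}
print(f_ratios(example)["group"])   # 2.0
```

A plain PROC GLM tests every term against the residual by default; a TEST (or RANDOM … / TEST) statement is needed to get the between-subject denominator for group and sequence.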
Silva ☆ Portugal, 2014-05-22 18:54 @ d_labes Posting: # 12990 |
|
d_labes ★★★ Berlin, Germany, 2014-05-19 15:02 @ Silva Posting: # 12969 |
|
— Regards, Detlew |
Silva ☆ Portugal, 2014-05-23 14:12 @ Silva Posting: # 12992 |
|
Thanks for all the input. From all the discussions about the group effect within the forum, a thought came to me: do we really need to test the group effect? If it is not tested, the MS error increases and the CIs are therefore expected to widen, which increases the producer's risk! Do the FDA or EMA really ask for a group-effect test?

For bioequivalence statistics (ANOVA and CI calculation), is Phoenix as acceptable as SAS for the FDA? And for the EMA?

Rgs, Silva

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut] |
ElMaestro ★★★ Denmark, 2014-05-23 14:31 @ Silva Posting: # 12993 |
|
Hi Silva,

❝ Do we really need to test the group effect? If it is not tested, the MS error increases and the CIs are therefore expected to widen, which increases the producer's risk! Do the FDA or EMA really ask for a group-effect test?

With group being a between-subject factor, I am not sure the group term in itself affects any within-subject variability, although I think the formulation × group interaction would steal a df and would be tested against the model residual (within). You are right that any step that increases the model residual translates into producer's risk. I need to look closer at the model-matrix constructs to be sure, though.

— Pass or fail! ElMaestro |
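The producer's-risk argument can be made concrete: on the log scale the 90% CI half-width in a 2×2×2 design is t(0.95, df) · sqrt(2 · MSE / n), so anything that inflates the residual MSE widens the CI. The numbers below are hypothetical, and t is held fixed to isolate the MSE effect; in a real fit the residual df, and hence t, would also change slightly between the two models (Python):

```python
import math

# 90% CI half-width on the log scale for a 2x2x2 crossover.
# t is fixed at ~1.714 (t(0.95, 23)) purely for illustration.
def half_width(mse, n, t=1.714):
    return t * math.sqrt(2.0 * mse / n)

# Hypothetical residual mean squares for n = 24 subjects:
w_without_group = half_width(mse=0.040, n=24)  # group SS pooled into residual
w_with_group    = half_width(mse=0.036, n=24)  # group terms kept in the model

print(w_without_group > w_with_group)  # True: larger MSE -> wider CI
```

Whether pooling actually inflates the MSE depends on how large the pooled SS is relative to the df it contributes; the sketch only shows the direction of the effect once it does.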
Helmut ★★★ Vienna, Austria, 2014-05-23 14:47 @ Silva Posting: # 12994 |
|
Hi Silva,

❝ For bioequivalence statistics (ANOVA and CI calculation), is Phoenix as acceptable as SAS for the FDA?

It is a common misunderstanding that the FDA requires SAS for the statistical evaluation of anything. Software (and the code/macros within it) has to be validated – and that's cumbersome. For commercial software only blackbox validation is possible. As the name already implies, open-source software (like R) additionally offers access to the source code…

❝ And for the EMA?

Everything except spreadsheets is acceptable. I've been using Phoenix (and WinNonlin before it) for ages. Quote from the Q&A document, 3.3 Alternative computer programs: "Results obtained by alternative, validated statistical programs are also acceptable except spreadsheets because outputs of spreadsheets are not suitable for secondary assessment."

For hints about the validation of software see the Guidelines / Guidances.

PS: In the future please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post!

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Lucas ★ Brazil, 2014-11-14 13:21 @ Helmut Posting: # 13867 |
|
Hi Helmut, how are you?!

❝ Everything except spreadsheets is acceptable. I've been using Phoenix (and WinNonlin before it) for ages. Quote from the Q&A document, 3.3 Alternative computer programs:

❝ Results obtained by alternative, validated statistical programs are also acceptable except spreadsheets because outputs of spreadsheets are not suitable for secondary assessment.

Did you validate all the workflow templates that you use in WinNonlin, or was there no need for that? Thx in advance. |
Helmut ★★★ Vienna, Austria, 2014-11-14 15:11 @ Lucas Posting: # 13868 |
|
Hi Lucas,

❝ Hi Helmut, how are you?!

THX; reasonably good.

❝ Did you validate all the workflow templates that you use in WinNonlin…

Good question! In the past I compared WinNonlin's output with results obtained from datasets given in the literature (see here). I survived one inspection in 2004.

❝ …or was there no need for that?

Software has to be validated, right? Unfortunately all those datasets were balanced crossovers. We recently published a paper comparing eight datasets evaluated by five different software packages – EquivTest/PK, Kinetica 5.0.10, SAS 9.2, Phoenix/WinNonlin 6.3.0.395, and R 3.0.2 (see here). We obtained identical results – with the exception of Kinetica, which screwed up all imbalanced datasets (PE & CI wrong). Thermo Fisher Scientific will correct the bug (see here). Amazingly, one of the reviewers of the manuscript asked "Was the software used in the analyses validated?", which we answered: "For the purpose of bioequivalence the softwares were not validated beyond installation and, where possible, automatic IQ/OQ/PQ. The point of the paper is that this is virtually impossible in the absence of a range of datasets, and the paper makes a proposal on which datasets such a validation could be based."

Last Wednesday another paper, dealing with 2-group parallel designs, was accepted by the AAPS Journal. In PHX/WNL one needs a workaround to adjust for unequal variances (i.e., the Welch-Satterthwaite approximation), and the software is limited to 1,000 subjects/group (not an issue in BE). You can make an educated guess about how Kinetica performed.

We "validated" PHX/WNL's templates for the FDA's RSABE and the EMA's ABEL (see here) by comparing their results with SAS'. Still some work to be done. The EMA's fully replicated dataset is imbalanced (n_RTRT ≠ n_TRTR) and incomplete (periods missing), whereas the partial replicate is both balanced and complete.

We learned (from examples John posted here and others in Pharsight's Extranet) that the FDA's ABE code for the partial replicate sometimes fails to converge due to the overspecified model (both in PHX/WNL and SAS). It seems that FA0(1) instead of FA0(2) could do the job. CSH (suggested in the FDA's guidance) did not work for these nasty datasets.

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
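For reference, the Welch-Satterthwaite approximation mentioned above for the 2-group parallel design can be sketched as follows. This is my own illustration of the textbook formula, not the Phoenix/WinNonlin workaround itself, and the variances and group sizes are hypothetical (Python):

```python
# Welch-Satterthwaite approximate df for the unequal-variances t-statistic.
def satterthwaite_df(s1_sq, n1, s2_sq, n2):
    """s1_sq, s2_sq: sample variances; n1, n2: group sizes."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Hypothetical log-scale variances and group sizes
df = satterthwaite_df(s1_sq=0.20, n1=30, s2_sq=0.05, n2=30)

# The approximate df always lies between min(n1, n2) - 1 and n1 + n2 - 2,
# shrinking toward the smaller bound as the variances diverge.
print(round(df, 1))
```

With equal variances the formula returns n1 + n2 - 2, i.e., the ordinary pooled-t df, which is why the approximation is a safe default.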