ElMaestro
★★★

Denmark,
2011-09-21 19:04
(4594 d 17:50 ago)

Posting: # 7383
Views: 16,262
 

 Welch/Satterthwaite in practice [General Statistics]

Dear all,

I am fiddling a bit with some spreadsheets and wish to construct a 90% CI for parallel designs, where I optionally wish to assume either equal or unequal variances. So Welch/Satterthwaite's equation for the calculation of the dfs comes into play in the unequal-variance situation. I further wish to get support for my calculation via R.
This causes a bit of grief.
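
For reference – and matching the function further down – the df in question is the usual Welch–Satterthwaite approximation, and the 90% CI uses the t-quantile at that df:

$$\nu=\frac{\left(s_T^2/n_T+s_R^2/n_R\right)^2}{\dfrac{\left(s_T^2/n_T\right)^2}{n_T-1}+\dfrac{\left(s_R^2/n_R\right)^2}{n_R-1}},\qquad \mathrm{CI}_{90\%}=\left(\bar{x}_T-\bar{x}_R\right)\pm t_{0.95,\;\nu}\,\sqrt{s_T^2/n_T+s_R^2/n_R}$$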

Let's say the calculated df is 5.123. Yes, not an integer. Would you
1. Round up? (No, you would never do that because it is not conservative in nature!)
2. Round down?
3. Play the R game:
TINV in my spreadsheet (OpenOffice) will round the df down to 5. But, and this is the tricky part, if we use t.test with var.equal=FALSE in R to get our confidence interval, then the confidence interval is calculated via the qt function, and in R that function accepts non-integer dfs:
qt(1 - 0.05/2, 5.123)
[1] 2.552129
qt(1 - 0.05/2, 5)
[1] 2.570582
qt(1 - 0.05/2, 6)
[1] 2.446912

You can take a look at the inner works of the t.test function with:
getAnywhere("t.test.default")

I am inclined to say it makes sense to round the dfs down to the nearest integer (option 2), in which case I do not see a way to pseudo-validate against R when the function t.test is used.

How would you think the df's should be treated?


ElMaestros.Alternative <- function(TEST, REF, Round.DF.Down=TRUE)
## constructs a 90% CI for test:ref using Welch-Satterthwaite's correction
## note: this function has not been particularly smartified.
{
  if (length(TEST)<4) stop("Too few observations for TEST.")
  if (length(REF)<4) stop("Too few observations for REF.")
  sx <- sqrt(var(TEST)/length(TEST))  ## standard error, test
  sy <- sqrt(var(REF)/length(REF))    ## standard error, ref
  sd <- sqrt(sx^2 + sy^2)             ## SE of the difference (shadows stats::sd)
  ## Welch-Satterthwaite approximation of the degrees of freedom
  df <- sd^4/(sx^4/(length(TEST) - 1) + sy^4/(length(REF) - 1))
  if (Round.DF.Down==TRUE) df <- floor(df)
  alpha <- 0.05
  tstat <- (mean(TEST)-mean(REF)) / sd
  L <- (tstat - qt(1 - alpha, df))*sd
  U <- (tstat + qt(1 - alpha, df))*sd
  cat("Lower 90% CI limit:", L, "\n")
  cat("Upper 90% CI limit:", U, "\n")
  if (Round.DF.Down==TRUE) cat("Rounded DF =", df, "\n")
     else cat("Unrounded DF =", df, "\n")
}

Some random values I just made up:
TEST=c(30.5, 36.8, 39.2, 35.9)
REF=c(33.1, 37.1, 34.9, 35.0, 36.7)


Here's how the t.test function will do it:
t.test(x=log(TEST), y=log(REF), var.equal=FALSE, conf.level=0.9)

Here's the reproduction using the function above:
ElMaestros.Alternative(log(TEST), log(REF), Round.DF.Down=FALSE)


Here's the alternative and conservative way with integer-rounded df:
ElMaestros.Alternative(log(TEST), log(REF), Round.DF.Down=TRUE)
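
BTW, for cross-checking: t.test returns the fractional df and the CI it actually used in the htest object, so you don't have to parse the printed output:

res <- t.test(x=log(TEST), y=log(REF), var.equal=FALSE, conf.level=0.9)
res$parameter  ## the unrounded Welch-Satterthwaite df
res$conf.int   ## the 90% CI, computed via qt() at that df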

Pass or fail!
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-09-22 11:11
(4594 d 01:43 ago)

@ ElMaestro
Posting: # 7385
Views: 14,639
 

 Not to round is the question

My dear captain,

where is your problem :ponder:?

All serious statistical software I'm aware of has functions for the t-quantile which accept non-integer degrees of freedom.

So why round/truncate the dfs at all, when you can get serious statistical software for free (the R project)?

We know that the Welch-Satterthwaite procedure is an approximate solution to the problem of testing means with unequal variances. Do we need an approximation to the approximation?

BTW: Seems Martin's theorem "Never never never never use Excel. Not even for calculation of arithmetic means." may be extended to OpenOffice Calc :-D?

Regards,

Detlew
ElMaestro
★★★

Denmark,
2011-09-22 14:34
(4593 d 22:20 ago)

@ d_labes
Posting: # 7386
Views: 14,507
 

 my problem

Dear d_labes,

Thanks for your response.

❝ where is your problem :ponder:?


Simple: a spreadsheet is the required option, and that's not my decision.

I had several requests very recently, all with some variation on the parallel-comparison theme. Since this isn't real PK I generally don't know the distributions either, so my own preference would be bootstrapping. But I will save that for another thread :-D

Again, thanks for your expert input here.

Pass or fail!
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-09-27 13:54
(4588 d 23:00 ago)

@ ElMaestro
Posting: # 7391
Views: 14,428
 

 Ways out

Dear EM,

❝ Simple: a spreadsheet is the required option, and that's not my decision.


Some ways to overcome the limitations of the spreadsheet implementations of the t-distribution:
  • Use Excel and the VB code for the non-central t-distribution contained in Dave Dubbins' FARTSSIE.xls. According to the comments within it, it should be able to handle fractional dfs.
  • Use RExcel, an add-in for Excel on MS Windows that allows transferring data between R and Excel, writing VBA macros that use R as a library for Excel, and calling R functions.
  • If MS Excel is not an option but OO Calc is, use "R and Calc", an OpenOffice.org (OO.o) Calc add-on that allows you to use R methods and functions.
To be honest, I have no experience with any of these suggestions.

BTW: The Excel built-in function TINV() also truncates fractional degrees of freedom.

Regards,

Detlew
yicaoting
★    

NanKing, China,
2011-09-29 18:29
(4586 d 18:25 ago)

@ ElMaestro
Posting: # 7395
Views: 14,522
 

 Welch/Satterthwaite in practice

Dear ElMaestro,
I think the key point of your task is the calculation of TINV(p, df) in Excel where df is not an integer.

Generally, Excel cannot do this.
But fortunately, you can use the function NCtInv(), which is nearly identical to TINV() in SAS.

NCtInv() was developed by Dr. Russel Lenth and can be accessed via FARTSSIE17; it is a user-defined function for Excel.

Using NCtInv() in Excel needs some basic understanding of VBA for MS Excel.

I hope it helps you.

BTW: I have tried it; the result was satisfactory.

My English is not very smooth; I hope you can understand my posting.
ElMaestro
★★★

Denmark,
2011-09-29 19:39
(4586 d 17:15 ago)

@ yicaoting
Posting: # 7396
Views: 14,463
 

 Welch/Satterthwaite in practice

Dear yicaoting & d_labes,

thanks for your kind advice.
Real life can be very cruel: I do not have SAS or Excel. I only have OpenOffice and R. Client-side the situation is typically Excel* and sometimes without admin privileges to install anything else.

As far as I can see, OpenOffice Calc has some limited options for programming in Basic (not Visual Basic, though). It seems a slight waste of time to write code in OpenOffice Basic here which will not execute on the clients' Excel machines; it's difficult to check in the absence of Excel here :-D
If I could find a portable (I mean portable between OO Calc and Excel) way to call a DLL, then I could probably write the code for that DLL in C or Delphi and make it do approximately what I want. But that's an uphill struggle. Please do let me know if you have ideas along these lines.


*: Validation aspects deliberately left out ;-)

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2011-09-30 18:52
(4585 d 18:02 ago)

@ yicaoting
Posting: # 7397
Views: 14,802
 

 Spreadshit addiction

Dear yicaoting & ElMaestro!

❝ The function NCtInv() […]


❝ BTW: I have tried it; the result was satisfactory.


Hhm, but how?
I must confess that I have only a somewhat outdated version of Excel (2000), but I had no problems running FARTSSIE.xls.
In FARTSSIE I added a new sheet, entered 5, 5.123, 6 in A1:A3, and =TINV(0.05,A1…A3) in B1:B3.
I get (as expected) 2.57058, 2.57058, 2.4469. If I add =NCtInv(0.05,A1…A3) to C1:C3, the debugger throws an error in VBA Module 1 ("Error in compiling: Sub or Function not defined") pointing to
NCtInv = Solve("tDummy", p, x, y) in
Public Function NCtInv(p As Double, df As Double, Optional Delta) As Double
Solve() calls (?) the Solver add-in (Solver.xla), which I have installed (proof: FARTSSIE works fine…). The noncentrality parameter Delta is optional – I don't know what's going on here (IMHO without noncentrality this section of the code should not be called at all).

BTW, NCt() works fine:
NCt(2.570582,5)     → 0.97500
NCt(2.552129,5.123) → 0.97500
NCt(2.446912,6)     → 0.97500


if compared to R:
t  <- c(2.570582,2.552129,2.446912)
df <- c(5,5.123,6)
pt(t,df)
[1] 0.975 0.975 0.975


@ElMaestro:
A crude workaround based on the first section of the macro…
df in cells A1:A3 (5, 5.123, 6).
In cells B1…B3 =SQRT(A1…A3*BETAINV(1-0.05,0.5,A1…A3/2)/(1-BETAINV(1-0.05,0.5,A1…A3/2))). I got:

df      Excel 2000  OO Calc 3.0.10    R 2.13
5        2.570578      2.570582      2.570582
5.123    2.552128      2.552129      2.552129
6        2.446912      2.446912      2.446912


The VBA section for fractional df without noncentrality shift is:
If p = 0.5 Then p = 0.5000000001
If p < 0.5 Then
    x = Application.BetaInv(1 - 2 * p, 0.5, df / 2)
    NCtInv = -Sqr(df * x / (1 - x))
Else
    x = Application.BetaInv(1 - 2 * (1 - p), 0.5, df / 2)
    NCtInv = Sqr(df * x / (1 - x))
End If

You should be able to adapt the lines above as a series of nested IF()s in OO Calc (like I did with the crowbar above).
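
A quick sanity check in R of the identity the macro relies on – the central t-quantile expressed via the inverse of the regularized β-distribution – with qt() as the reference:

p  <- 0.975
df <- 5.123
x  <- qbeta(1 - 2*(1 - p), 0.5, df/2)  ## the p > 0.5 branch of the VBA code
sqrt(df*x/(1 - x))                     ## 2.552129
qt(p, df)                              ## 2.552129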


P.S.: Don’t know what I’m doing here. Just fiddlin’ around. :smoke:

Edit: Toys for boys. The nested Ifs in the VBA code as a one-liner give a pretty long formula. Use the simple one from above only for α = 0.05. Results are identical to the 8th digit; I tried dfs between 1 and 6. OO Calc performed slightly better than Excel 2000 (maybe this issue disappeared in later versions - see here). The largest deviation I got was for t(0.05,<2), ~0.00002. Funny that the formula based on the inverse β-distribution outperformed the one based on the inverse t in Excel:
0.05, df=1 (diff to R and OO calc)
based on BETAINV() 12.70618521 (-0.00001953)
based on TINV()    12.70615030 (-0.00005444)

The full story: in A1 df, in B1 p=1-α/2, in C1 =IF(OR(B1<=0,B1>=1,A1<=0),NA(), IF(B1=0.5, SQRT(A1*BETAINV(1-2*(1-0.5000000001),0.5,A1/2)/(1-BETAINV(1-2*(1-0.5000000001),0.5,A1/2))), IF(B1<0.5,-SQRT(A1*BETAINV(1-2*B1,0.5,A1/2)/(1-BETAINV(1-2*B1,0.5,A1/2))), SQRT(A1*BETAINV(1-2*(1-B1),0.5,A1/2)/(1-BETAINV(1-2*(1-B1),0.5,A1/2))))))
  1. p ≤0 | p ≥1 | df ≤0 throws #NA
  2. workaround for p=0.5 (preventing an error in BETAINV(0;0.5;A1/2))
  3. p <0.5
  4. p >0.5 (else)
Test case for your famous df=5.123
df      p       Excel 2000    OO Calc        R
5.123   0.000           #NA          #NA    -Inf
5.123   0.025   -2.55212813  -2.55212890  -2.55212890
5.123   0.500    0.00000038   0.00022634   0
5.123   0.975    2.55212813   2.55212890   2.55212890
5.123   1.000           #NA          #NA     Inf
0       0.975           #NA          #NA     NaN


I hate Microsoft's partial translations – which were carried over to OO Calc. On a German system NA() → NV(), IF() → WENN(), SQRT() → WURZEL().

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2011-09-30 20:54
(4585 d 16:00 ago)

@ Helmut
Posting: # 7398
Views: 14,490
 

 what a divine post!

Dear HS,

❝ @ElMaestro:

❝ A crude workaround based on the first section of the macro…

❝ df in cells A1:A3 (5, 5.123, 6).

❝ In cells B1…B3 =SQRT(A1…A3*BETAINV(1-0.05,0.5,A1…A3/2)/(1-BETAINV(1-0.05,0.5,A1…A3/2))).


Weeeeeeeeeeeeeeeeeeeeeeeeeeee. That's it! I love you. It isn't crude, it's fantastic. Will you marry me? Or may I humbly kiss your shoe soles? This is really really great.
Thanks a lot and have a good weekend.

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2011-09-30 21:41
(4585 d 15:13 ago)

@ ElMaestro
Posting: # 7399
Views: 14,282
 

 what a divine post!

Dear ElMaestro!

❝ That's it! I love you. […] Will you marry me?


Well. May I sleep on it? Have to ask my girlfriend for permission first. Haven’t tried bigamy yet.

❝ Or may I humbly kiss your shoe soles?


Wait a minute. See the edit of my last post first. OO Calc reproduces R perfectly. I would say you may use the simple formula and check with your client whether the results are OK in later versions of M$ Excel. Anyhow, I wouldn't worry about a difference between spreadshits in the sixth decimal place (the largest I saw for df >2).

If you prefer wearing belt plus suspenders, opt for the 280-character monster. :-D


P.S.: Congrat’s to your post #400!

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2011-10-01 15:03
(4584 d 21:51 ago)

@ Helmut
Posting: # 7402
Views: 14,140
 

 what a divine post!

Dear HS,

❝ P.S.: Congrat’s to your post #400!


Oh dear... so many words, so little substance.
EM.
Helmut
★★★
Vienna, Austria,
2011-10-01 15:20
(4584 d 21:34 ago)

@ ElMaestro
Posting: # 7404
Views: 14,170
 

 what a divine post!

Dear ElMaestro!

❝ Oh dear... so many words, so little substance.


How can you say that? Remember, e.g., this very substantial and insightful one?

[image]

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yicaoting
★    

NanKing, China,
2011-10-01 07:36
(4585 d 05:18 ago)

(edited by yicaoting on 2011-10-01 08:13)
@ Helmut
Posting: # 7400
Views: 14,354
 

 Spreadshit addiction

Dear HS,

Thank you very much for your comparison of Dr. Russel Lenth's NCtInv() in Excel with OO Calc and R.

Since Delta is an optional variable in Dr. Russel Lenth's NCtInv(), in cases without a noncentrality parameter you can recode
NCtInv = Solve("tDummy", p, x, y)
as
'NCtInv = Solve("tDummy", p, x, y)
to skip the execution of this line; then NCtInv() will work well in your spreadsheet.

The difference between Dr. Russel Lenth's NCtInv() in Excel and R may be caused by the different results of the BetaInv() function in Excel and R.
Since I didn't install R on my computer, here I can only compare the BetaInv() function in Excel with that in SAS.

Excel 2003:
=BETAINV(0.001,2,4)=0.010101318359375

SAS 9.1:
=BETAINV(0.001,2,4)=0.0101017879

OO Calc 3.3
=BETAINV(0.001,2,4)=0.0101017878837378

Nevertheless, Dr. Russel Lenth's NCtInv() is a highly useful UDF for our calculation, although there may be a very, very small bias. It should be noted that this bias is not caused by the algorithm itself but by Excel's BETAINV(). The lucky thing is that MS has improved the precision of BETAINV() from 0.0000003 in Excel 2003 to about 10^(-15) in Excel 2007. Excel statistical functions: BETAINV


Based on the result of R, it seems that BETAINV() in OO Calc has higher precision than that in Excel.

Dear HS, do you agree with me?

BTW: Today is our National Day (since 1949-Oct-1); the whole country is full of happiness. I can't help giving my best wishes to all of you, your families, and your countries.
ElMaestro
★★★

Denmark,
2011-10-01 13:13
(4584 d 23:41 ago)

@ yicaoting
Posting: # 7401
Views: 14,445
 

 Spreadshit addiction

Dear yicaoting and HS,

❝ Based on the result of R, it seems that BETAINV() in OO Calc has higher precision than that in Excel.


For my part I am very happy now that I have a solution, and it isn't of paramount importance to have 17 correct decimals. If a client of mine should ask for 17 decimals I would most likely tell him to go [to protect minors this sentence has been partly and pirately censored] himself :pirate:.

Decisions re. BE are made on the basis of two decimals when we talk ordinary BE, so one can try and argue the same in my case.

I cannot find info on the actual implementations in either Excel or OO. A great paper comparing certain numeric precision aspects here.

Couldn't help laughing when I read page 19 ("Microsoft claims…"), by the way. Life is great. But then again, it is my impression that people at SAS aren't particularly informative either re. their implementations. That's why R so rules, imho.

Happy national day, yicaoting.

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2011-10-02 05:20
(4584 d 07:34 ago)

@ ElMaestro
Posting: # 7406
Views: 14,112
 

 GNU addiction

Dear ElMaestro!

❝ A great paper comparing certain numeric precision aspects here.


THX for reminding me of this paper. Had to give Gnumeric (1.10.16) another try. Wow! Have a look at its functions. tinv() does not round dfs!
df     tinv(0.05,df)
5      2.5705818356364616
5.123  2.5521289049159468
6      2.4469118511451238


There are two implementations of the inverse β-distribution: betainv() for Excel/OO Calc compatibility and r.qbeta() calling R's algorithm. However, the results are identical:
betainv(0.001,2,4)=r.qbeta(0.001,2,4)=0.010101787883737754

Now to your example:
[image]

Looking at the formulas behind it, it's clear: welcome to Welch's test!
[image]

        Welch Two Sample t-test

data:  log(TEST) and log(REF)
t = 0.0596, df = 3.876, p-value = 0.9555
alternative hypothesis: true difference in means is not equal to 0
90 percent confidence interval:
 -0.1196929  0.1265074
sample estimates:
mean of x mean of y
 3.568160  3.564752



[image] Rulez!

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
Vienna, Austria,
2011-10-01 15:03
(4584 d 21:51 ago)

@ yicaoting
Posting: # 7403
Views: 14,294
 

 Spreadshit addiction

Dear yicaoting!

❝ Since Delta is an optional variable in Dr. Russel Lenth's NCtInv(), in cases without a noncentrality parameter you can recode

NCtInv = Solve("tDummy", p, x, y)

❝ as

'NCtInv = Solve("tDummy", p, x, y)

❝ to skip the execution of this line; then NCtInv() will work well in your spreadsheet.


Yes, right. I was wondering why the VBA compiler had problems, since the Else-part of the code should not be triggered (IsMissing(Delta) = TRUE). Of course, VBA compiles the entire function before the first call, so the reference to the undefined Solve() fails at compile time even though that branch would never be executed.

❝ The difference between Dr. Russel Lenth's NCtInv() in Excel and R may be caused by the different results of the BetaInv() function in Excel and R.


Correct. As ElMaestro noted above, one of the wonderful things in R is the documentation (including the algorithms used; see here).

❝ Since I didn't install R on my computer […]



Get it from one of the mirror sites close to you. Big fun.

❝ […] here I can only compare the BetaInv() function in Excel with that in SAS.

❝ Excel 2003:

❝ =BETAINV(0.001,2,4)=0.010101318359375

❝ SAS 9.1:

❝ =BETAINV(0.001,2,4)=0.0101017879

❝ OO Calc 3.3

❝ =BETAINV(0.001,2,4)=0.0101017878837378


Results in Excel 2000 = Excel 2003. My numbercruncher’s (R 2.13.0, options(digits=22)) nitpicking results:
qbeta(0.001,2,4)
[1] 0.01010178788373775407572
:-D

❝ Nevertheless, Dr. Russel Lenth's NCtInv() is a highly useful UDF for our calculation, […]



Sure. It helped me to come up with the 280-character monster formula, which can be used in Excel/OO Calc without active scripting (which might be the only option in some companies).

❝ Based on the result of R, it seems that BETAINV() in OO Calc has higher precision than that in Excel.

❝ Dear HS, do you agree with me?


Absolutely.

[image] Happy National Day!

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yicaoting
★    

NanKing, China,
2011-10-01 18:39
(4584 d 18:15 ago)

@ Helmut
Posting: # 7405
Views: 14,488
 

 Spreadshit addiction

Dear HS and ElMaestro, thanks for ElMaestro's sharing of that great paper and HS's useful links for R.
To improve the precision of Dr. Russel Lenth's NCtInv() in Excel, I had to find a way to improve Excel's BetaInv() function instead of using Excel's BetaInv() directly.

I have tested BetaInv(0.001,2,4) in MS Excel 2007; it returns 0.010101318359375, which is identical to that in Excel 2003. How disappointing MS's so-called improvement is. Actually, they did nothing! I will test it in MS Excel 2010 if possible.

I found a solution that improves the precision of Excel's BetaInv() function to the level of MS's so-called 10^(-15): it uses bisection on Excel's built-in BetaDist() function, which has obviously higher precision than BetaInv().

'-----------------------------------------------------------------
Public Function NCtInv(p As Double, df As Double) As Double
    Dim X As Double
    If (p <= 0) Or (p >= 1) Or (df <= 0) Then
        MsgBox "Illegal argument", vbCritical, "NCtInv"
        Exit Function
    End If
    If p = 0.5 Then p = 0.5000000001    ' workaround: BetaInverse(0, …) would error
    If p < 0.5 Then
        'X = Application.BetaInv(1 - 2 * p, 0.5, df / 2)
        X = BetaInverse(1 - 2 * p, 0.5, df / 2)
        NCtInv = -Sqr(df * X / (1 - X))
    Else
        'X = Application.BetaInv(1 - 2 * (1 - p), 0.5, df / 2)
        X = BetaInverse(1 - 2 * (1 - p), 0.5, df / 2)
        NCtInv = Sqr(df * X / (1 - X))
    End If
End Function
'-----------------------------------------------------------------
'-----------------------------------------------------------------
' Inverse of the beta distribution by bisection on Excel's
' built-in BetaDist()
Public Function BetaInverse(p As Variant, Alpha As Variant, Beta As Variant) As Variant
    Dim X As Variant
    Dim A As Variant
    Dim B As Variant
    Dim Precision As Variant
    X = 0
    A = 0
    B = 1
    Precision = 10 ^ (-15)              ' target width of the bracketing interval
    Do While B - A > Precision
        X = (A + B) / 2                 ' midpoint of the current bracket
        If Application.BetaDist(X, Alpha, Beta) > p Then
            B = X
        Else
            A = X
        End If
    Loop
    BetaInverse = X
End Function
'-----------------------------------------------------------------

How can I quickly leave a blank at the head of each row in BEBAC? My code had blanks at the start of several rows, but after I submitted the post the blanks disappeared. *) See note. [HS]

Using the improved BetaInverse(), I get:
Excel's BetaInv(0.001,2,4)=     0.010101318359375
My     BetaInverse(0.001,2,4)=  0.0101017878834719
It can be seen that my BetaInverse(0.001,2,4) is much closer to R's qbeta(0.001,2,4) 0.01010178788373775407572 than Excel's BetaInv(0.001,2,4).
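
If someone wants to cross-check the bisection logic outside of VBA, here is a minimal R mirror of BetaInverse() above (pbeta() playing the role of BetaDist(); in R itself one would of course simply call qbeta()):

BetaInverse <- function(p, alpha, beta, precision = 1e-15) {
  A <- 0; B <- 1; X <- 0
  while (B - A > precision) {  ## halve the bracketing interval
    X <- (A + B)/2
    if (pbeta(X, alpha, beta) > p) B <- X else A <- X
  }
  X
}
BetaInverse(0.001, 2, 4)  ## 0.01010179..., compare qbeta(0.001, 2, 4)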

Thus, the results for ElMaestro's df=5.123 example refresh as:
df      p       Excel 2003    OO Calc        R
5.123   0.000           #NA          #NA    -Inf
5.123   0.025   -2.55212890  -2.55212890  -2.55212890
5.123   0.500    0.00000006   0.00022634   0
5.123   0.975    2.55212890   2.55212890   2.55212890
5.123   1.000           #NA          #NA     Inf
0.000   0.975           #NA          #NA     NaN

Excel's calculation is performed with the improved NCtInv() containing the improved BetaInverse().
Now the bias has been greatly decreased; it may be acceptable in most cases.


  • Multiple blanks and tabs are removed (that’s the standard behaviour of HTML). If you want to keep multiple blanks highlight the text and click on the Code button right of the text area. Suitable BBCodes will be inserted before/after the text and in HTML rendered in a monospaced font. However, tabs are still removed. For details see here. [Helmut]
yicaoting
★    

NanKing, China,
2011-10-03 09:23
(4583 d 03:31 ago)

@ Helmut
Posting: # 7407
Views: 14,186
 

 Spreadshit addiction

I am very glad to report my test of MS Excel 2010's BetaInv() Function.

Excel 2000, 2003, or 2007:
BETAINV(0.001,2,4)=0.010101318359375

Excel 2010:
BETAINV(0.001,2,4)=0.0101017878837378

SAS 9.1:
BETAINV(0.001,2,4)=0.0101017879

OO Calc 3.3
BETAINV(0.001,2,4)=0.0101017878837378

R 2.13.0
qbeta(0.001,2,4)
[1] 0.01010178788373775407572


Now, I am satisfied with MS's improvement on MS Excel 2010.
What about you?

But I want to ask HS: before you used the user-defined function NCtInv(), how did you do Welch's t-test for 2-group parallel BE analysis with unequal variances in your "Manual Excel 2000", which frequently appeared in your PPTs?

I am highly interested in your Excel template for BE analysis; would you like to share it with me for personal in-house use?
Helmut
★★★
Vienna, Austria,
2011-10-03 17:59
(4582 d 18:55 ago)

@ yicaoting
Posting: # 7414
Views: 14,099
 

 Spreadshit addiction

Dear yicaoting!

❝ Now, I am satisfied with MS's improvement on MS Excel 2010.

❝ What about you?


Sure. Took Micro$oft almost two decades (just checked Excel 5.0 [1993]: BETAINV(0.001,2,4)=0.010101318359375). Rome wasn’t built in a day. :-D
It would be interesting to know whether v2010's TINV() still rounds down dfs to the next integer. Would you be so kind as to check?

❝ But I want to ask HS: before you used the user-defined function NCtInv(), how did you do Welch's t-test for 2-group parallel BE analysis with unequal variances in your "Manual Excel 2000", which frequently appeared in your PPTs?


It’s easy to code it the usual way. In Gnumeric’s output you even get the formulas in the result cells. In my slides I got the t-value from outside Excel, since the dfs are wrong (as we know). That’s why I stated in my slides »Maybe it’s time to leave M$-Excel.«

❝ I am highly interested in your Excel template for BE analysis; would you like to share it with me for personal in-house use?


Hhm. I wrote that just for fun in 1993 in order to check Chow’s/Liu’s examples. Have to dig it out from the grave…

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yicaoting
★    

NanKing, China,
2011-10-03 19:35
(4582 d 17:19 ago)

@ Helmut
Posting: # 7416
Views: 14,051
 

 Spreadshit addiction

Dear HS!

❝ It would be interesting to know whether v2010's TINV() still rounds down dfs to the next integer. Would you be so kind as to check?


I am sorry to report that Excel 2010 still rounds down dfs to an integer.
Tinv(0.1,10)=1.8124611
Tinv(0.1,10.123)=1.8124611
Tinv(0.1,10.789)=1.8124611
Tinv(0.1,11)=1.7958848


❝ Hhm. I wrote that just for fun in 1993 in order to check Chow’s/Liu’s examples. Have to dig it out from the grave…


Waiting for your treasure....
Helmut
★★★
Vienna, Austria,
2011-10-03 19:41
(4582 d 17:13 ago)

@ yicaoting
Posting: # 7417
Views: 14,173
 

 Spreadshit addiction

Dear yicaoting!

❝ I am sorry to report that Excel 2010 still rounds down dfs to an integer.

Tinv(0.1,10)=1.8124611

Tinv(0.1,10.123)=1.8124611

Tinv(0.1,10.789)=1.8124611

Tinv(0.1,11)=1.7958848


Great. Professionals at work.

❝ ❝ Hhm. I wrote that just for fun in 1993 in order to check Chow’s/Liu’s examples. Have to dig it out from the grave…

❝ Waiting for your treasure....


You know how corpses look after that many years. :vomit:
Don’t expect too much! :PCchaos:

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes