Experimental setup, details [General Statistics]
Hi nobody,
❝ Ehhm, whut? Maybe you are too fixated on statistical testing? How about some old-fashioned "visual inspection of fit" paired with "back-calculated values of calibration samples from the calibration function, including relative deviation"?
❝ Just saying
Your idea is absolutely good. But there are complications...
❝ No idea what you want to compare in the end, so...
Let us say I have a cattle prod, a long-haired Austrian, and a microphone that is rigged up to my computer for recording.
The cattle prod is a wonderful tool; it makes persuasion such an easy task. My cattle prod has a lot of knobs, buttons, and sliders that allow me to tweak combinations of:
- Tesla coil zap modifier strength
- Evil discharge combobulator intensity
- Ion stream barbaric voltage gain
- Apocalyptic wolfram anode ray modulation
...and so forth.
I am zapping the Austrian in his sorry rear and recording the intensity (dB) of his immediate... shall we say... verbal reaction. Let me add that each experiment takes no more than a few seconds, but there is a bit of recovery time before the next experiment can be run, and that is not related to the latency of the prod. Surprisingly, under the present experimental conditions my sensitive microphone also records a lot of cursing and swearing after each zap. I am not sure why or what the source is, but it is a matter I am looking into, as I am not sure such behavioral excursions are good for my poor sensitive electronics.
Anyway, I know from the past that with the default prod settings there is a decent (though not overwhelmingly good) correlation between the reaction recorded in dB and the prod's output measured in GJ per zap. I have done that experiment with 16 evenly spaced levels between 0 and 17500 GJ/zap and got an r squared of 0.9583.
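For reference, that default-settings fit is nothing fancier than a straight line, y = a*x + b. A minimal sketch of it in R follows; the data frame calib, its columns GJ and dB, and the simulated dB values are placeholders, not my real recordings:

# Minimal sketch of the default-settings calibration fit, y = a*x + b.
# 'calib', 'GJ' and 'dB' are placeholder names; the dB values are simulated.
GJ    <- seq(0, 17500, length.out = 16)        # 16 evenly spaced levels
calib <- data.frame(GJ = GJ,
                    dB = 90 + 0.003 * GJ + rnorm(16, sd = 2))
fit   <- lm(dB ~ GJ, data = calib)             # simple linear regression
summary(fit)$r.squared                         # plain r squared of this fit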
The question is whether I can improve the correlation by playing around a little with the settings mentioned above. So I acquired via Amazon an ACME Automated Experimental Prod Analyzer Kit (only $598 with 2 years of warranty) so that I can automate the experiments, which consist of changing a lot of settings, actuating the prod, and recording the resulting reaction.
So I have generated 9629 datasets; some of them have 16 data points and some have 15, 14, or 13 because of missing values (in practice this means a relative lack of contact between the tip of the prod, when actuated, and the Austrian's butt).
I need to find the settings for which I am most likely to be able to give a zap that triggers a 138 dB (±0.1 dB) response. Hence I need the correlation to be as good as possible, including a correction for sample size. With 9629 datasets I cannot visually inspect every single set and determine the best fit; I must find a solution in R.
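To make the question concrete, this is the kind of thing I am picturing, only as a sketch; the list runs holding the 9629 data frames and the column names GJ and dB are assumptions about how the data would be organised:

# Sketch: fit y = a*x + b to every dataset and rank by adjusted r squared,
# which penalizes fits with fewer points (13-16 after the misses).
# 'runs' is assumed to be a list of data frames with columns GJ and dB.
score_run <- function(d) {
  d   <- na.omit(d[, c("GJ", "dB")])           # drop rows recorded as NA (missed zaps)
  fit <- lm(dB ~ GJ, data = d)
  c(n      = nrow(d),
    r2     = summary(fit)$r.squared,
    adj.r2 = summary(fit)$adj.r.squared)
}
scores <- t(sapply(runs, score_run))           # one row per dataset
head(scores[order(scores[, "adj.r2"], decreasing = TRUE), ], 10)  # ten best fits

Whether adjusted r squared is the right sample-size correction for finding the settings that will let me hit 138 dB is of course part of what I am asking.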
This is my grave predicament. "Help me noone kenobi, you are my only hope".
—
Pass or fail!
ElMaestro
Complete thread:
- Goodness of fits: one model, different datasets ElMaestro 2017-10-06 23:01 [General Statistics]
- Goodness of fits: one model, different datasets nobody 2017-10-07 16:03
- Experimental setup, details ElMaestro 2017-10-07 18:06
- Visualization ElMaestro 2017-10-07 19:07
- multiple regression? Helmut 2017-10-08 17:17
- just y=ax+b ElMaestro 2017-10-08 17:30
- just y=ax+b Helmut 2017-10-08 17:35
- just y=ax+b ElMaestro 2017-10-08 17:50
- just y=ax+b nobody 2017-10-08 20:26
- ANCOVA with R? yjlee168 2017-10-08 21:28
- just y=ax+b DavidManteigas 2017-10-09 10:34
- just y=ax+b nobody 2017-10-09 10:45
- just y=ax+b Helmut 2017-10-10 18:15