In the end, we see what we want to see.
Some people hide this so well that not even they realize it.
Exactly so.
I think that you have misunderstood their gibberish with regard to the 'exactly', Briggs. I blame the gibberish, not the decipherer. They mean running the exact same experiment on the exact same group in the exact same etc. etc., but with different assignment between control and experimental arms. They don't say that because their brains are made of pudding. Just reading the little bit of their writing that you posted gave me a headache. Anyway, their 'exactly' duplicated tests with assignment to different groups theoretically would wash out confounders, but only assuming some nice distribution of confounders in the whole population, and assuming an effect on the same scale as the confounders, and a bunch of other stuff that these ladies couldn't understand even if they actually tried to, which they didn't and never will. But we know that what they're proposing is impossible (it was Heraclitus who said you can't step into the same stream twice): even supposing you could get the participants to participate in repeated runs of the same experiment, they quite simply aren't the same people after Run N that they were before Run N.
The funny thing is that none of this matters when looking for large effects with suitably sensitive equipment. It's this insane min/maxing, trying to tease out a tiny effect and clone it to the moon, that is boogering everything up. It comes from the Cartesian idea that the whole is just the sum of its components and that you can resolve everything into component parts without changing it. That sort of thing invariably leads us into the maze of sorcery (in Greek, sorcery is literally pharmakeia, which makes this quite apt) that has us all dazed and confused.
Having stumbled into the wrong classroom, what the heck:
You control for suspected confounders because you suspect them.
You can't control for unsuspected confounders precisely because you don't suspect them, so you don't look for them when "randomly assigning" subjects into treatment and control groups (while carefully controlling for the suspected confounders).
Suppose you are testing a treatment for cats and you suspect coat color might confound the results. So you control for coat color. You ensure both the control group and treatment group have 10 white cats and 10 black cats each.
You "randomize" the assignment by coat color of cats into groups, say by pulling cat collar numbers out of a hat, but you do not also check under the tail and so you have no idea how many of the 10 white cats you put in the control group are female, and so on.
Pulling collar numbers out of a hat to control for coat color does not, cannot make the results of the treatment orthogonal to the sex of cats. And the suggested recourse is to imagine running the experiment infinite times so you can settle for running it once? Wait, what?
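A quick simulation makes the point concrete (a minimal sketch; the herd, the collar-number hat, and the 50/50 sex ratio are all hypothetical):

```python
import random

random.seed(1)  # any seed; this is illustrative only

# Hypothetical herd: 20 white and 20 black cats. Nature assigns sex;
# the experimenter never checks under the tail.
cats = [{"coat": coat, "sex": random.choice(["F", "M"])}
        for coat in ["white"] * 20 + ["black"] * 20]

# Stratified "randomization": within each coat color, pull collar
# numbers out of a hat and send half to treatment, half to control.
treatment, control = [], []
for coat in ("white", "black"):
    stratum = [c for c in cats if c["coat"] == coat]
    random.shuffle(stratum)  # the hat
    treatment += stratum[:10]
    control += stratum[10:]

def count(group, key, value):
    return sum(1 for c in group if c[key] == value)

# Coat color is balanced by construction...
print("white cats in treatment:", count(treatment, "coat", "white"))  # always 10
print("white cats in control:  ", count(control, "coat", "white"))    # always 10

# ...but sex, the unsuspected confounder, lands wherever the hat puts it.
print("female cats in treatment:", count(treatment, "sex", "F"))
print("female cats in control:  ", count(control, "sex", "F"))
```

Run it with a few different seeds: the coat counts never budge, while the female counts wobble between the arms, and with only 20 cats per arm the wobble can be large.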
“Wither Objectivity” is an apt phrase
https://www.ahdictionary.com/word/search.html?q=wither
🗨 It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought. ~~ET Jaynes
¯\_(ツ)_/¯
Yes, a favorite quote. Look up gremlins on my site.
wmbriggs.com/?s=gremlins+ <-- Your site looks heavily gremlin-infested 🤭
--
PS Once is happenstance (wmbriggs.substack.com/p/the-wokes-nice-book-burning/comment/14927658). Twice is coincidence. I’ll be on the lookout for enemy action 😂
Unfortunately, in some cases "requires more thought" should be read as "requires one to be nearly omniscient".
Thought gradations aplenty well short of 'nearly omniscient' 😏
💬 infinitely far away from infinity
Being able to explicitly control for every relevant factor is actually rather close to "nearly omniscient".
Who argues *that*‽ 😇
Well, the OP for starters.
Well, then I'm not sure we read the same OP 🤷
Please forgive my ignorance, as I did not follow all the probability arguments.
Suppose 10% of people in the treatment group got a heart attack and 5% in the control group did, and this was a large randomized trial with 20k people in each group, so there was statistical significance. What factor other than treatment could have caused the doubling of heart attacks? Randomization says that every factor other than treatment is spread evenly across the groups.
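As an aside, the significance claim does check out on those numbers (a sketch using a standard pooled two-proportion z-test; the 10%, 5%, and 20k figures are the hypothetical above):

```python
from math import sqrt

# Hypothetical trial: 20,000 per arm, 10% vs 5% heart attacks.
n = 20_000
p_treat, p_ctrl = 0.10, 0.05

# Pooled two-proportion z-test.
p_pool = (p_treat + p_ctrl) / 2                      # 0.075 (equal arm sizes)
se = sqrt(p_pool * (1 - p_pool) * (1 / n + 1 / n))   # ~0.0026
z = (p_treat - p_ctrl) / se

print(f"z = {z:.1f}")  # ~19, far beyond any conventional cutoff
```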
Alas, "Randomization" does not say so, and cannot say so, because it has no idea how many other possible causes there can be.
To see that, try your argument again without the "randomization". Point is, "randomizing" does nothing for you.
It all comes down to the premise "the treatment can cause heart attacks", which is accepted conditionally. Without that, all you know is that some other cause exists (which caused the heart attacks in the control group). This is deduced.
I have a paper on this. Will post more on this subject, because I see I have caused confusion.
Instead of general explanations, please tell me what specifically could cause a doubling of heart attacks.
Suppose some factor other than treatment caused the extra heart attacks. Call it X. But we know that, due to randomization and large numbers, X will have very similar prevalence in the control and treatment groups. So X can't explain this doubling of heart attacks. The only thing that is sufficiently different between the groups is treatment. So the treatment caused it.
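The "large numbers" half of that argument is easy to illustrate (a minimal sketch; the unobserved factor X and its 30% prevalence are made-up numbers):

```python
import random

random.seed(7)  # illustrative only

# Hypothetical population: 40,000 people, each carrying (or not) some
# unobserved binary factor X at 30% prevalence.
has_x = [random.random() < 0.30 for _ in range(40_000)]

# Randomize into two arms of 20,000 each.
random.shuffle(has_x)
treat, ctrl = has_x[:20_000], has_x[20_000:]

prev_t = sum(treat) / len(treat)
prev_c = sum(ctrl) / len(ctrl)
print(f"X prevalence, treatment arm: {prev_t:.3f}")
print(f"X prevalence, control arm:   {prev_c:.3f}")
print(f"gap: {abs(prev_t - prev_c):.4f}")  # typically well under 0.01
```

With 20,000 per arm, the chance gap in prevalence is on the order of sqrt(2·p(1−p)/n) ≈ 0.005 here, an order of magnitude smaller than the 5-point difference in heart-attack rates the commenter describes.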
Quick answer. I see I have to explain better.
Your thinking goes toward cause, as it should, and it is best.
But I ask you to think of the same experiment with no "randomization", and to rethink the example with the premise "This is a cause."
I have to do a better job explaining this.
This is great, William. I think of "random" control trials for food additives - everyone's diet is so different, it's entirely possible that side effects, or the lack thereof, are caused only by other additives or interactions with them. A big problem.
Are they really using this Prob (C|T) = Prob (T|C) nonsense definition? By Bayes' theorem (and yes, ignoring the E that should always be present), this is equivalent to Prob (C) = Prob (T).
Yes, an exact quote.
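For the record, the equivalence asserted above follows in two lines from the definition of conditional probability (a sketch, with C for cause, T for treatment, and the background evidence E suppressed as in the comment):

```latex
P(C \mid T) = \frac{P(C \wedge T)}{P(T)}, \qquad
P(T \mid C) = \frac{P(C \wedge T)}{P(C)}

P(C \mid T) = P(T \mid C)
  \iff \frac{P(C \wedge T)}{P(T)} = \frac{P(C \wedge T)}{P(C)}
  \iff P(C) = P(T), \quad \text{provided } P(C \wedge T) > 0
```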
Two things regarding your Gatorade scenario:
1. Real-life scientific studies are rarely (if ever) that black-and-white in their results.
2. Correlation does not equal causation, as the old saying goes.
I don't doubt that there are institutional factors that play a part in why science is broken right now. But people do have a superstitious view of science, and it's been going on for quite some time. There's a reason why WMG likes to harp on "reification" as a big problem.
The point is that there are a lot of factors going about even in a "lab-controlled" setting. Like you mentioned, even a coin toss isn't "random". And probability is simply an abstraction of real world factors that none of us can completely grasp because we are limited human beings.
If you want to say that science is not an absolute and that it involves a lot of guesswork, then I completely agree with you. We just have to be honest about it. And the problem goes beyond "we didn't science hard enough." It's a worldview problem which sees science as the means to determine what reality is instead of what it actually is: glorified guesswork.
Randomization cannot contribute to cause, because randomness, i.e. probability, is a matter of the mind. Like logic. Probability is what you deduce from given premises, just as in logic.
Only things which have power can be causal.
The causes can be mixed up, or distributed, by "randomization", but you have no idea how, and can't know. If you knew the causes, you would not need to "randomize".
See more at: https://www.wmbriggs.com/class/