Why You Need To Read This
My dear readers, a complex subject today, presented in the guise of a book review.
We are increasingly beset by lunatic psychotic sociopathic rulers wielding The Science like a club. Not always, but certainly most of the time, this The Science is bad science. So we must grasp how this bad The Science is created.
Some of the methods are easy to see, and we have gone through many. Some are subtle, and far from simple. But there is one method that is particularly beloved by almost all scientists, and which all think is dandy. That is the so-called gold standard Randomized Controlled Trial.
There is nothing in the world wrong, and everything right, with a controlled trial. But randomization is pure superstition, no different than cargo cult science, as I have explained in great detail in Uncertainty (scroll down here). And will explain here, too.
Randomization does nothing except give a false sense of certainty that cause has been proved in an experiment. Randomization is treated like a magic wand that is waved over data and blesses it. Which you might think is yet another hilarious joke, of the kind for which I am so famous. Alas, no. It is in earnest.
The book we are reviewing is The Tangle of Science: Reliability Beyond Method, Rigour, and Objectivity by Nancy Cartwright, Jeremy Hardie, Eleonora Montuschi, Matthew Soleiman, and Ann Thresher. I do not like this book, but wanted to.
There is tremendous effort required on your part to follow this post. But follow it you must.
You can skip the Review and go right to RCTs.
Review
It is unlikely any book written by a committee will come off sounding like anything other than a book written by a committee. Alas, that is true here, too. Too much of the book reads like a transcript from a discussion group. I at times pictured a group of women taking turns giving their best concerned faces—you’ve seen them—saying “I feel…”, while somebody took notes.
The arguments are therefore not tight: there are too many words. The longest chapter of the book asks Whither Objectivity? As interesting a philosophical question as that might be, the reader is bludgeoned into not caring. Same with the chapter defining “rigorous”. Does it matter whether a piece of science is called rigorously objective or objectively rigorous? Maybe. But I simply could not love these topics when presented like this.
The “tangle” itself is obvious. Most measured things have lots of causes which operate under a plethora of changing conditions, so it’s damned hard, and even impossible, to keep track of it all in complex phenomena. Which everybody already knew. We’ll have more on this subject later, on what I call the causal chain.
That’s all I’m going to say of the book. Which makes this review unfair, but I needed to save space for its central mistake about RCTs.
Randomized Controlled Trials
I will prove to you that RCTs cannot prove cause. The authors of Tangled will also prove that, inadvertently, while trying to prove RCTs can prove cause.
First off, controlled trials can prove cause conditionally. Suppose a scientist says, whether or not he believes it, “Here are the only possible causes of O (the outcome), and I shall control or account for all of them, including this new one, X.” Then if the uncertainty that O takes some value changes depending on X, X might be a cause of O. And it is a cause of O, conditionally, provided all those other causes do not in turn cause X in all circumstances.
Of course, the scientist might be wrong. His list of causes could contain lacunae, or it might have spurious entries. No matter: conditionally, his judgement is correct about X being a cause. Unconditionally he might be wrong.
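If that reads as airy philosophy, here is a toy sketch of it, with every number invented for illustration: one acknowledged and controlled cause B, the new candidate X, and an outcome O.

```python
import random

random.seed(1)

def outcome(b, x):
    """O occurs with a base chance set by the controlled cause B,
    plus an extra push when the candidate cause X is present.
    (Invented numbers, for illustration only.)"""
    p = 0.2 if b else 0.1
    if x:
        p += 0.3
    return random.random() < p

# Hold the acknowledged cause fixed (B = True) and vary only X.
n = 100_000
with_x = sum(outcome(True, True) for _ in range(n)) / n
without_x = sum(outcome(True, False) for _ in range(n)) / n

print(f"Pr(O | X B)  ~ {with_x:.2f}")     # about 0.50
print(f"Pr(O | ~X B) ~ {without_x:.2f}")  # about 0.20
# The uncertainty in O shifts with X while every listed cause is
# controlled; conditional on the list being complete, X is a cause.
```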
With me so far?
Trials which are highly controlled, usually of the very small or common, are good at identifying cause. Which is why they are used so well and often in physics, chemistry and in everyday life. You shoot a slingshot at the window. It breaks. You caused it to break. One form of cause. The rock penetrating the window is another cause. Cause has aspects.
Your controlled experiment has proved cause. Conditionally. Because ackshually, somebody will chime in, “Yeah, well, an alien from Klygorg could have shot a secret space ray at the window at the same time! That could be the true cause.” Well, you can go on like that imagining causes forever. And if you’re short of material for a peer-reviewed publication, that’s what you do. The rest of us will spank you for breaking the window.
It was, therefore, obvious that control can prove cause, at least conditionally, on the belief that the thing under consideration is a cause. You see, I trust, the circularity. But it is not a vicious circle.
All right. Let’s quote Tangled on RCTs.
An RCT is an experiment that uses a correlation…between a treatment T and a later outcome O in a population to draw causal conclusions about T and O in that population…T may be correlated with a later O and yet not cause it if T is correlated with other factors that cause O. Such factors are called ‘confounders’. For the moment we call the net effect of the confounders C…
I’ll pass over the loose way of speaking of correlation (most think only of the linear kind). They miss that T may also be correlated with O even if it has nothing in the world to do with it or with any confounders.
Now say that T is orthogonal to C iff the two are probabilistically independent: Prob (C|T) = Prob (T|C). At its simplest what an RCT does is try to ensure orthogonality between T and C in the population enrolled in the experiment.
This is not independence as it’s usually defined. The usual is this: Pr(CT|E) = Pr(C|E)Pr(T|E). Which is to say, knowing C tells you nothing about T, and knowing T tells you nothing about C, given some background evidence E. Which must be present; E, that is. There is no such thing as unconditional probability.
Their definition is odd. It’s the probability confounders are, what, operative? given that a treatment has been applied, which is supposed to equal the probability the treatment is applied given confounders have been, what, applied? This has nothing to do with what we want to know, which is how much of an effect, if any, T has, in the presence of C, given E.
We can’t know these probabilities anyway, because we cannot see the C! (Unlike sailors.) So, even if this criterion is right and proper, how can you know if Prob (C|T) = Prob (T|C)? Answer: you cannot.
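To see on paper how their criterion and the usual definition come apart, here is a minimal sketch built on an invented joint distribution, the kind of thing nobody ever possesses in a real trial:

```python
# A minimal check of the usual definition, Pr(CT|E) = Pr(C|E) Pr(T|E),
# against theirs, Pr(C|T) = Pr(T|C). Every number below is made up.

joint = {  # Pr(C = c, T = t | E)
    (True, True): 0.12,
    (True, False): 0.28,
    (False, True): 0.18,
    (False, False): 0.42,
}

pC = sum(p for (c, _), p in joint.items() if c)  # Pr(C|E) = 0.40
pT = sum(p for (_, t), p in joint.items() if t)  # Pr(T|E) = 0.30

# Usual independence holds on these numbers:
print(round(joint[(True, True)], 2), "=", round(pC * pT, 2))  # 0.12 = 0.12

# Their criterion fails on the very same numbers:
print(round(joint[(True, True)] / pT, 2))  # Pr(C|T) = 0.4
print(round(joint[(True, True)] / pC, 2))  # Pr(T|C) = 0.3
# C and T are independent in the usual sense, yet Pr(C|T) != Pr(T|C);
# the two quantities only match when Pr(C) happens to equal Pr(T).
```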
What we want to know is if, in this trial, the confounder causes operated along with the treatment cause, or vice versa, or that only one set of causes worked, or etc. In other words, what we want to know is Pr (O | TCE) and Pr (O | T’CE), which is the probability the Outcome (takes some value) given the treatment is applied (T), in the presence of the Lord knows how many confounders C, and whatever background evidence E we have. (Yet we must take C as part of E, since we can’t see C.) Or the same probability but assuming the treatment is not applied (T’, using one way of writing “not-T”).
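Here is that same wish list sketched with invented trial counts; note how C never appears anywhere in the arithmetic, because it cannot:

```python
# What we actually want, estimated from invented counts. The unseen
# confounders C are silently folded into the background evidence E.

treated = {"O": 120, "no_O": 380}  # hypothetical treatment arm, T
control = {"O":  80, "no_O": 420}  # hypothetical control arm, T'

p_O_T = treated["O"] / (treated["O"] + treated["no_O"])
p_O_notT = control["O"] / (control["O"] + control["no_O"])

print(f"Pr(O | T E)  ~ {p_O_T:.2f}")     # 0.24
print(f"Pr(O | T' E) ~ {p_O_notT:.2f}")  # 0.16
# The difference may be from T, from unseen Cs riding along with T,
# or from both at once; the arithmetic cannot say which.
```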
Here comes the bad move:
The first stage in trying to ensure orthogonality is random assignment.
My friends, random only means unknown, or unknown cause. So that random assignment means what caused a person to be assigned the treatment or control is not known. That’s it. Nothing more.
Random assignment is the opposite of control. So that randomized control is a sort of oxymoron. In reality, the control is acknowledgement that some causes, or potential causes, are already known or assumed, which is why the persons in, say, a medical trial are pre-separated by sex. Well, I mean in the old days, when medicine still acknowledged biological sex.
Here comes the magic wand:
The population is randomly assigned, half to the treatment group, where everyone receives it, and half to the control group, where no one receives it. Random assignment…ensures that T is orthogonal to all confounders at the point of assignment…
No it doesn’t.
In the group you assign the treatment you have no idea what the confounders are. If you did, you would control for them. They are confounders because you don’t know what causal powers they have. There is absolutely no guarantee, whatsoever, that your trial will have an equal split of confounding causes in your treatment and control groups. Further, since you have no idea what these confounders are, there is no way to know what fraction of the list of confounders are in each group. For you do not know the list. If you did, you would control for them.
You don’t even know how many confounders there are: there may be one, there may be none, there may be plenty. Your treatment group may have all of one confounder and none of all the others, and the control group may have none, some, or all.
“Randomization” does not, and cannot, produce an equal split. How could it, since you don’t know what the confounders are, or how many? Randomization does nothing to help you. Except take from experimenters the ability to assign people to groups. Which is not a bad thing, because as I say all scientists believe in confirmation bias, but they all believe it happens to the other guy. But you can get the same thing with blinding, and without the bad magical thinking of randomization.
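A small simulation shows how little the wand does. All numbers are invented: forty subjects, ten of whom secretly carry a single confounder.

```python
import random

random.seed(7)

# Invented setup: 40 subjects, of whom 10 secretly carry one confounder.
# In a real trial nobody knows who they are, or that there are 10 of them.
subjects = [True] * 10 + [False] * 30
random.shuffle(subjects)

treatment, control = subjects[:20], subjects[20:]
print("carriers in treatment arm:", sum(treatment))
print("carriers in control arm:  ", sum(control))
# More often than not the split is unequal (an exact 5-5 split happens
# in fewer than 3 runs in 10), and no one running a real trial can
# perform even this simple check, because no one can see the carriers.
```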
Here, now, is the self-own, and admission that the magic wand is powerless:
Recall, however, that probability is an ‘infinite long run’ notion.
No, it isn’t. Unique, single, or finite propositions can be assigned probability. But let that pass.
Drum roll, with my emphasis:
This means that you shouldn’t expect an equal distribution of C factors amongst the T (i.e. treatment) and not-T (i.e. control) groups in any single randomisation of the study population, but rather that if you repeat the experiment, doing a random assignment again and again, over and over, on exactly the same population with exactly the same characteristics, the sequences of relative frequencies of Cs in T and of Cs in -T [not T, or the control] will converge to the same limit.
I’m sure I don’t have to tell you I do not give a damn about trials I have not done, but theoretically might, and that the trials that I did not do cannot give power or knowledge to the one that I did do. Which is what “randomization” purports to do!
Whatever happens out at infinity is of no interest to what happens to my single experiment. I can’t do an infinite number of experiments. Notice there is no notion, and can be no notion, about rate of convergence, either. Leading to the Large Enough Fallacy: the belief that, say, 20 is close enough to infinity—when any number you can think of is infinitely far away from infinity.
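Here is what the infinite long run buys you, sketched with the same invented population as above: the average over many repetitions behaves beautifully, while any single trial does whatever it likes.

```python
import random

random.seed(42)

# Same invented population: 10 hidden carriers among 40 subjects.
population = [True] * 10 + [False] * 30

counts = []
for _ in range(10_000):                  # 10,000 re-randomizations
    arm = random.sample(population, 20)  # one fresh treatment arm
    counts.append(sum(arm))              # carriers landing in that arm

print("long-run average per arm:", sum(counts) / len(counts))  # -> ~5.0
print("first five single trials:", counts[:5])
# Out at the horizon the average settles at 5, exactly as advertised.
# But you run ONE trial, and that one may hand you a 2 or an 8.
```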
And anyway, just what does their use of exactly mean? No, I’m asking. Stop and think of this question.
Did you stop and think? Admit it if you didn’t.
If a new trial is exactly the same as the old, then how in the unholy hell could there be more or fewer C in the new trial group than in the old trial group? The only way is if they are not the same! If the trials were exactly the same, then you’d get exactly the same answers every time. Ah, so how can they think there would be differences? Because of the superstition of “randomness”, which is allowed to vary in some mystical way, though everything else is exactly the same. Great nonsense.
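You can watch this on a computer, where the “random” assignment has a perfectly definite cause, namely the generator’s seed. A sketch only, but the moral is exact: repeat everything exactly, and you get exactly the same assignment.

```python
import random

population = list(range(40))  # stand-ins for the very same 40 subjects

def assign(seed):
    """One 'randomization'. The assignment is entirely caused by the
    generator's state, i.e. by the seed, whether or not we look at it."""
    rng = random.Random(seed)
    subjects = population[:]
    rng.shuffle(subjects)
    return subjects[:20]

# Repeat the trial with everything exactly the same, seed included:
print(assign(2024) == assign(2024))  # True, every time, forever
# Variety appears only if something is quietly allowed to differ:
print(assign(2024) == assign(2025))  # False (the hidden cause moved)
```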
See what I mean about giving probability magical powers, or treating it like a superstition? If you do not see what I mean, you had better figure it out. The mixing up of probability and cause, or knowledge of cause, is rife in science. It is why so many “studies show” obvious falsities.
We finally understand why some think “randomization” can prove cause. It is only that old conditioning we met above. T is thought to be a cause because T was conditionally assumed to be a cause. That’s it, and nothing more.
So ignore all the hoopla about “randomized” controlled trials. Each study stands or falls on the control it had, and not on the “randomization” given to it.
If you still haven’t grasped the key, I will follow this up with an article There Is No Such Thing As A Fair Coin.