Why (Scott Alexander's) Bayesian Rationality Fails
Scott Alexander, late of Slate Star Codex and New York Times doxee, is the subject of the latest from Curtis Yarvin, né Moldbug.
Rather, Alexander's (and Moldbug's? see the postscript) Bayesian rationality is. The rationality is there in the linked piece, buried amidst its 20,000-plus words.
Alexander, a wordy bugger himself, apparently used to have this slogan at his old blog:
"P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary."
This is Bayes's formula, written, as most write it, badly, and in a way that leads to the Rationality Fallacy, which is the belief that rational thought is the highest form of thought. The commentary-quip is just plain false.
First, Bayes is a dumb formula, where by dumb I mean the old-fashioned sense of the word. It does not speak; it contains no intelligence. It takes input, churns it, and spits out a number, just like a calculator. Calculators aren't responsible for what you put into them, and neither is this formula. The highest form of thought is not calculation.
Second, Bayes isn't even needed. It's a nice tool, but nothing exciting. I don't say jettison it, but for the sake of sanity, don't worship it.
All probability fits this schema:
Pr(Y|X).
The uncertainty in Y is given by what you say in X. Everything you believe or assume, at this moment, everything that is probative of Y goes into X. If you know X_1 now and later learn X_2, then X = X_1 + X_2, where the "+" reads as logical "and".
You can calculate both Pr(Y|X = X_1) and Pr(Y|X = X_1 + X_2). You can use Bayes to help in the calculation, or just jump to Pr(Y|X) (as in the link above), or you could employ one of dozens of other formulas depending on the problem's complexity. Bayes is not about "updating" per se; it's just a probability calculation.
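To make the churning concrete, here is a toy calculation in Python. Every number in it is invented for illustration (a binary Y, made-up likelihoods, and an assumption that the two pieces of evidence are conditionally independent); the point is only that updating step by step and conditioning on everything at once land on the same answer, because Bayes is nothing but arithmetic.

    # Toy Bayes arithmetic: Y is a binary proposition; all numbers invented.
    prior = 0.5                   # Pr(Y | X), the starting X
    px1_y, px1_not = 0.8, 0.3     # Pr(X_1 | Y), Pr(X_1 | ~Y)
    px2_y, px2_not = 0.6, 0.4     # Pr(X_2 | Y), Pr(X_2 | ~Y)

    def update(p, like_y, like_not):
        """One Bayes step: Pr(Y|E X) = Pr(Y|X)*Pr(E|Y X) / Pr(E|X)."""
        return p * like_y / (p * like_y + (1 - p) * like_not)

    step1 = update(prior, px1_y, px1_not)   # Pr(Y | X_1)
    step2 = update(step1, px2_y, px2_not)   # Pr(Y | X_1 + X_2)

    # Conditioning on X_1 and X_2 jointly (independence assumed) matches.
    joint = update(prior, px1_y * px2_y, px1_not * px2_not)
    assert abs(step2 - joint) < 1e-12
    print(round(step2, 3))   # 0.8; the output is only as good as the inputs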
Or you could use no formula and wing it, since almost all uncertainties are not quantifiable.
All these formulas are just that: formulas. The formulas are rational, yes. But all the magic comes in specifying the Y and X. And that's mostly non-rational; in some cases wholly unrational. Yarvin uses this word in distinction to irrational, but more as a pejorative than as the solution it really is.
The Y is easy enough: it's the thing you want to know about, suitably, or not, defined. The X is more mysterious. Hold that for the moment.
Here's why Alexander's way of writing probability is wrong, which helps explain why rationality is a false god.
You can't ever have a Pr(A) (or Pr(Y)). There is always at least an implicit X lurking, so that the real notation is Pr(A|X). This isn't just being pedantic. It really does show why rationalism is a failed philosophy.
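To see it, carry the implicit X through Alexander's slogan, and every term picks up a condition:

Pr(A|BX) = [Pr(A|X)*Pr(B|AX)]/Pr(B|X).

The naked Pr(A) vanishes. There was always something to the right of the bar, and no formula tells you what to put there.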
Yarvin picks as one of his examples Y = "Obama's birth certificate is fake". He then gathers two or three boatloads of words to cram into his X, and derives, using Bayes, his Pr(Y|X). Then he says (among much hemming and hawing) this represents the rational approach to evidence.
I've never brought myself to care about Obama's origins. My take was always that if he was shown to be foreign born, Congress would have "discovered" an obscure codicil in some forgotten law that explained how Kenya in 1961 was "really" part of the USA. Or whatever.
That's part of my X. Not Yarvin's. Bayes is silent on both of our Xs. Yet we can each rationally quantify our Y, accepting our Xs and then following the formulas. Following (the proper, of course) formulas is rational thinking, which is a good thing.
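To dramatize the split, here is a sketch with invented numbers standing in for the two verbal Xs (neither is Yarvin's actual reckoning nor mine; these are assumptions for illustration only):

    # Same dumb formula, two different Xs, two different rational answers.
    def bayes(prior, like_y, like_not):
        # Pr(Y | evidence, X) from Pr(Y|X) and the likelihoods X assigns.
        return prior * like_y / (prior * like_y + (1 - prior) * like_not)

    # A hypothetical X that reads the evidence as probative of a fake.
    print(round(bayes(prior=0.5, like_y=0.9, like_not=0.2), 2))    # 0.82
    # A hypothetical X that reads the same evidence as nearly worthless.
    print(round(bayes(prior=0.05, like_y=0.5, like_not=0.45), 2))  # 0.06

    # Both runs are impeccably rational; the disagreement lives entirely
    # in the inputs, about which Bayes has nothing to say.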
Rationality is a real thing. It says, "Use these formulas, from math and logic, in this way without error." That's fine and sure advice. But---here's the big but---rationality is silent on what goes into the formulas. What goes in is what it's all about.
There is no way to bootstrap rationality. All thought has to begin in inspiration, sometimes called intuition, of which there are many kinds, none of them rational.
Here's a prime example of non-rational thinking with which everybody agrees: the law of noncontradiction. You can't have Y and ~Y (using Yarvin's notation for not-Y, which is popular) true at the same instant. In our extended notation, it cannot be that Pr(A|B) = Pr(~A|B) = 1, since Pr(A|B) + Pr(~A|B) = 1 (but, for geeks, we can have Pr(A|B) = Pr(~A|C) = 1 as long as B ≠ C).
There's no rational way to prove noncontradiction. There's no formula to get to it. You either believe it, or not. If we let noncontradiction be our Y, the only possible X is, in effect, "Y is obviously true". The formula Pr(Y|Y is obviously true) = 1 works fine, and is rational. Yet there's a lot more to thought than the blind following of formulas.
There's also blind faith.
No universal---and we all believe scads of them, and need to---is capable of rational proof. No formula gets you to one. Except maybe in math, which does have limit formulas, though all of these ultimately rely on unrational or non-rational propositions, which is to say axioms.
Bayes as philosophy fails. It isn't so much that it fails because it's dumb, but because it's a way to disguise necessary unrational thinking and falsely label it rational. In every situation we must answer: which X for this Y, and why? Rationalists pretend the X they have picked is rational, the only "obvious" choice. Unless you agree, you're "being irrational".
Then there is the profound difference between probability and decision. They are not the same. No Pr(Y|X) compels any action or decision. Your decision depends on what matters to you, not to me, even if we agree on Y and X. (I have more on this, coincidentally, later this week.)
There are some decision formulas, but they are just like Bayes. They are dumb and only work on what you supply; all require the unrational belief that this decision formula is the "best" or "proper" one. All morals and ethics are unrationally based.
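Here is a minimal sketch of the probability/decision gap, with a loss table invented for illustration (the 0.3 and the "losses" are assumptions, not anyone's real stakes). Two actors agree exactly on Pr(Y|X), dutifully minimize expected loss, and still part ways:

    # Same probability, different losses, different decisions.
    pr_y = 0.3   # the agreed-upon Pr(Y|X); an invented number

    # loss[action][outcome]: what each actor stands to lose. Invented.
    mine  = {"act": {"Y": 0, "not_Y": 1},  "wait": {"Y": 10, "not_Y": 0}}
    yours = {"act": {"Y": 0, "not_Y": 50}, "wait": {"Y": 2,  "not_Y": 0}}

    def best_action(loss, p):
        # Expected loss per action; take the smaller. Minimizing expected
        # loss is itself one dumb rule among many; picking it is an
        # unrational step the formula cannot justify.
        exp = {a: p * l["Y"] + (1 - p) * l["not_Y"] for a, l in loss.items()}
        return min(exp, key=exp.get)

    print(best_action(mine, pr_y))   # act
    print(best_action(yours, pr_y))  # wait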
Rationality fails in decision making because a lot of what goes into these decisions is imponderable, as universals were in probability, and variable, like the Xs themselves.
Rationalists say their loss function is the rational choice. You hear this every time somebody wants to force an action because that's what "the Science says." If you disagree, you're a "denier".
Moldbug and Alexander even give a good example here, but fail to connect it all up. Government officials adopt silly and harmful coronadoom policies, they say, whereas bloggers (ahem) beat them in predictions. This is because officials aren't trying to just get the doom right, like bloggers: they're trying to do power right, too. The two decisions aren't the same.
Postscript Moldbug seems to be on my side here, at times, and also Alexander's. He never commits. Yarvin thus reminds me of the rabbi in Hail, Caesar!, in the scene where priests and a rabbi are asked to examine a movie studio's new Christ-centered epic. The rabbi cavils, disputes, interjects, argues. Yet when asked his decision, he says, "Eh, I haven't an opinion."