Why Simple Models Are Better Than Complex In Finance, Climate, Coronadoom & More
Right after yesterday's post on (again) why rationalism and reason fail as philosophies, and on their severe limitations, I read an article profiling Gerd Gigerenzer and his "simple rules".
It repeats a lot of what I said in simpler language. This does not surprise me, as I draw upon Gigerenzer in Uncertainty, especially on what people think models are. He never uses the term Deadly Sin of Reification, but it's there between the lines.
Here's the succinct summary from the author of the piece:
There is a big difference between risk and uncertainty. You are dealing with risk when you know all the alternatives, outcomes and their probabilities. You are dealing with uncertainty when you don’t know all the alternatives, outcomes or their probabilities.
Like I said yesterday, in artificial situations like homework problems where the logic (which I called L) is specified in complete rigor, it is possible to calculate both the probability and the best decision. The recognition that probability and decision are not the same thing must still be there, of course.
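To make that concrete, here is a toy sketch in Python, with invented numbers of my own (not from yesterday's post). The urn and the payoffs are given in complete rigor, so both the probability and the best decision follow by simple arithmetic. Notice they need not point the same way: the less probable color can still be the better bet.

```python
# A toy "homework" problem, fully specified: the urn's contents and the
# payoffs are given in complete rigor, so the probability and the best
# decision both follow by arithmetic. Numbers are invented for illustration.

p_red = 3 / 10          # urn holds 3 red and 7 blue marbles: P(red) = 0.3
p_blue = 1 - p_red

# Two available bets with fully specified payoffs: expected value of each.
bets = {
    "bet_red":  p_red  * 10 + p_blue * (-2),   # win 10 on red, lose 2 otherwise
    "bet_blue": p_blue *  3 + p_red  * (-2),   # win 3 on blue, lose 2 otherwise
}

best = max(bets, key=bets.get)
print(f"P(red) = {p_red:.2f}")
print(f"expected values: {bets}")
print(f"best decision: {best}")   # betting red wins, even though red is less probable
```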
But reality, when it is complex as defined below, cannot be specified in anything like complete rigor. When people think they have done so is when they commit the Deadly Sin of Reification.
The mistake rationalists and champions of reason make is to suppose that every problem is completely, or well enough, specified in rigor, so that the answers they supply are the "right" and "obvious" ones. This is why they label deviations from these right and obvious answers as "irrational". And this is why they seek theories of why this supposed irrationality exists.
Gigerenzer also recognizes this. See if this doesn't sound familiar: "The problem with complex models is not calculations - computers can do that pretty well and fast. The problem is that they would demand that you make estimations. And that’s where things go wrong."
And this:
One of Gigerenzer’s favorite examples is the modern portfolio theory, pioneered by Harry Markowitz back in the 1950s. Markowitz offered a mathematical framework to design your portfolio so you can maximise your returns for any given level of risk. His theory was elegant, is taught in finance courses in universities across the world, has finance professors swear by it, and won him a Nobel prize in 1990.
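Gigerenzer's complaint is easy to see in a sketch. Below is a bare-bones Python version of the Markowitz-style calculation, using toy numbers of my own (nothing from the article): the arithmetic the computer does is trivial, but the "optimal" weights hang entirely on estimated means and covariances, and a small bump in those estimates swings the portfolio around.

```python
# A minimal sketch of Markowitz-style mean-variance weights, with made-up
# inputs, to show Gigerenzer's point: the calculation is easy, but the
# weights depend on *estimated* means and covariances. Nudge the estimates
# slightly and the "optimal" portfolio changes noticeably.
import numpy as np

cov = np.array([[0.04, 0.02, 0.01],
                [0.02, 0.09, 0.03],
                [0.01, 0.03, 0.16]])    # assumed covariance of three assets

def mv_weights(mu, cov):
    """Classic unconstrained mean-variance weights, normalised to sum to one:
    w proportional to inverse(cov) @ mu."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

mu_est  = np.array([0.06, 0.08, 0.10])           # estimated mean returns
mu_bump = mu_est + np.array([0.01, -0.01, 0.0])  # tiny estimation error

print("weights with estimate  :", np.round(mv_weights(mu_est, cov), 2))
print("weights with small bump:", np.round(mv_weights(mu_bump, cov), 2))
```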
Someone else who understands the limitations of this approach, but in a different and more limited way, is Nassim Taleb.
Simplified, Taleb recognized that the rigor in the old models, like Markowitz's, lay in their insistence on normality. All the models were correct given the assumptions they made. Just as all the theories of Q followers are also correct given the assumptions they make.
Taleb's solution was to widen the expanse of models to consider beyond normals. This is why he's always on about "fat" or "heavy" tails. He uses models which assign more probability to rare events than normal models do.
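For a sense of scale, here is a small Python sketch (my illustration, using scipy's distributions and parameters picked purely for convenience) comparing the probability a normal model and a heavy-tailed Student-t model give to a rare "five sigma" event.

```python
# Under these assumed parameters, a heavy-tailed Student-t model assigns far
# more probability to a rare event (a move beyond 5 standard units) than the
# normal model does. Both are still models, not reality.
from scipy import stats

threshold = 5.0                          # a "5-sigma" move
p_normal = stats.norm.sf(threshold)      # upper-tail probability, normal model
p_t3     = stats.t.sf(threshold, df=3)   # same event, Student-t with 3 d.o.f.

print(f"normal model     : P(X > 5) = {p_normal:.2e}")
print(f"Student-t (df=3) : P(X > 5) = {p_t3:.2e}")
print(f"ratio            : {p_t3 / p_normal:,.0f}x more probable under fat tails")
```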
But he never went as far as Gigerenzer. Taleb, just like the older crew, also commits the Deadly Sin of Reification when he insists those heavy tails are real. The events themselves are as real as can be. The probability never is.

Models which allow for greater variability do better than normality-based ones in Finance. But they are, like all models, reifications. This is not bad in itself. It's only a sin when the model is taken as reality.
Taleb might have suspected his models were "the" truth because he personally made a bundle. But it's more likely he made money by having better intuition, better rules of thumb, about the markets he was meddling in. There is no substitute for experience.
None of this is to say that models cannot be useful, or even excellent. They surely can. Like I always say, casinos make fortunes from terrific, well verified, but still reified, models. It is a model that says the probability of snake eyes is 1/36. A damned good model. But a model nonetheless when applied to the actual craps table. The dice will do what they do because of various causes, not because of probability or a model.
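Here is a minimal Python sketch of that point (my illustration, not anyone's trading system): the 1/36 model and a simulated million throws of fair dice agree to several decimal places. That agreement is exactly what makes it a damned good model, and it is exactly what it remains: a model.

```python
# The model says P(snake eyes) = 1/36, and a long simulated run of fair dice
# agrees closely. The real dice at a real table roll as they do because of
# physical causes, not because of the model.
import random

random.seed(1)                   # fixed seed, so the illustration is reproducible
rolls = 1_000_000
snake_eyes = 0
for _ in range(rolls):
    d1 = random.randint(1, 6)
    d2 = random.randint(1, 6)
    if d1 == 1 and d2 == 1:
        snake_eyes += 1

print(f"model probability   : {1/36:.5f}")            # 0.02778
print(f"simulated frequency : {snake_eyes/rolls:.5f}")
```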
The Deadly Sin of Reification can be (and usually is) made at craps tables, but there it is at best a venial sin. It becomes mortal with the growth of complexity.
Anything involving groups of people involves complexity. Finance markets, global cooling, the spread of disease, politics, war and the like. It is there that experts sin. Not by creating models, which (I repeat) can be of some use. It is when experts insist their model is the model, or the only model. It is when they insist their decision is not just the right one, but the only possible decision.
It is when experts (or their blind fans) scream "Denier!" when their reification is questioned.
Perhaps the best definition of (midwit) expert is he who believes his model is reality. This is why we need to rescue ourselves from experts.
Finance Addendum The almost-exception to this is Finance. If the majority of traders are using the same suite of models, regardless of whether this is known to them all, then the more these models capture the "model universe", the less reification there is.
This is close to the artificial situation where the events are described in complete rigor. And it is why markets work well---most of the time. But there's always that event that was outside the view of the models that proves that complete rigor is not to be had. The Deadly Sin of Reification steals your money.