Researchers Claim “Climate Change” Causes Currency Vulnerability: An Instance Of Forgotten Uncertainties
We know all about models by now, do we not, dear readers? A model, also known as a theory, can be made about any contingent thing. Say, your 401(k), or the fate of a currency, or the weather, or the growth of academics researching “climate change”.
Because the model is of something contingent, also known as an observable, and because of things like limitations in data and, even more important, limitations in thought, there will always be, or at least should always be, uncertainty in the model’s predictions.
This is in no way a limitation or a fault. It is the nature of models.
But what kind of mistake would you be making if you took the output of your model and fed it into another model stripped of all its uncertainty?
And what kind of mistake would the owner of the second model be making if he supplied his output, also stripped of uncertainty, to the creator of a third model?
It’s a trick question. The answer is: a dumb mistake.
We discussed this before under the label multiplication of uncertainties, but maybe a better title is forgotten uncertainties.
The idea is simple.
Model A says the chance of temperatures exceeding some high, critical value (at some place, etc.) is, being generous, 50%.
Model B says the chance of some high-temperature-dependent thing, like the production of a crop falling given that high temperature, is 60%.
Model C says the chance the GDP falls by some amount, if the crop is low, is 80%.
It would be a tremendous blunder to announce “Eighty percent chance the GDP falls by 6 basis points when climate change hits”, because the uncertainties have not been multiplied like they should have been.
In our example, the proper phrase is this: “The chance the temperature exceeds some value, and the crop production falls, and the GDP falls is 24%”, because 0.5 × 0.6 × 0.8 = 0.24.
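A minimal sketch of the arithmetic in Python, using the invented probabilities above (the numbers are illustrative only, not from any real model):

```python
# Illustrative chained-model probabilities from the example above.
p_temp_exceeds = 0.5           # Model A: temperature exceeds the critical value
p_crop_falls_given_temp = 0.6  # Model B: crop production falls, given that high temperature
p_gdp_falls_given_crop = 0.8   # Model C: GDP falls, given the low crop

# The headline quotes only the last number (80%). The chance of the whole
# chain of events happening is the product of the three:
p_joint = p_temp_exceeds * p_crop_falls_given_temp * p_gdp_falls_given_crop
print(f"Chance of the full chain: {p_joint:.2f}")  # prints 0.24
```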
What seemed almost certain with one headline has, with proper accounting for the full uncertainty, dropped to something not especially likely.
I was most generous with these probabilities, making the best case for those who tout “climate change”. In real models the complexity is much greater, the string of models and assumptions (assumptions are models) longer, and the resultant full uncertainty much greater.
So great is the uncertainty that it’s difficult to get worked up over any announcement of doom.
Let’s keep this in mind as we look at the peer-reviewed paper “Climate Change Vulnerability and Currency Returns” by Alexander Cheema-Fox and two others, in the Financial Analysts Journal.
They begin by stating only bad things that can happen with “climate change”. The good is forgotten, or never considered. On the other hand, try getting a paper published that says “Look at all these terrific things climate change will bring us.” I won’t wait up for you.
Incidentally, I put the scare quotes around “climate change” to indicate I haven’t any clear idea what they mean by the phrase. And neither do the authors. To them, as to many, it is a scare word, a bad thing, something to be avoided. Precision in the term is not only missing, but unwanted.
Our authors “use data from the Notre Dame Global Adaptation Initiative (ND-GAIN Index) that measures a country’s vulnerability to climate change.” This is a model, and though I won’t prove it here, an over-certain one. (Our hunger for over-certain single-number summaries of vast complexities is not limited to this field.)
Norway (at this writing) is given a 75.4. Not 75.3, or 75.5: 75.4. Denmark is 71.0.
This one single precise number summarizes a country’s “climate change” capability in terms of: (1) exposure, (2) sensitivity, (3) adaptive capacity, (4) economic readiness, (5) governance readiness, and (6) social readiness. Of the first three, our authors say the index “considers six life-supporting sectors including food, water, health, ecosystem services, human habitat, and infrastructure.”
One perfectly certain number. To summarize an entire culture and what it might do in an unknown future once “climate change” hits. What can it mean? The question is rhetorical. The index should be laughed at. Be sure, though, to see that the first hidden model is that of “climate change” itself. We never see that model, yet it is somehow taken as certain.
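A toy sketch of the objection, under invented assumptions: the component scores, their ±5-point uncertainty, and the equal-weight average below are all made up for illustration and are not ND-GAIN’s actual construction. The point is only that if the inputs carry any uncertainty at all, the index is a range, not a number good to one decimal place.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example only: six index components (exposure, sensitivity,
# adaptive capacity, economic/governance/social readiness), each published
# as a single number but in reality uncertain. The component values, the
# 5-point spread, and the simple average are assumptions for illustration.
reported = np.array([78.0, 74.0, 70.0, 77.0, 76.0, 75.0])
spread = 5.0  # assumed standard deviation of each component

draws = rng.normal(loc=reported, scale=spread, size=(10_000, reported.size))
index_draws = draws.mean(axis=1)  # assumed aggregation: equal-weight average

print(f"Point index (as published): {reported.mean():.1f}")
print(f"Plausible range once component uncertainty is kept: "
      f"{np.percentile(index_draws, 5):.1f} to {np.percentile(index_draws, 95):.1f}")
```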
Back to our authors, who use the time-changing ND-GAIN indexes (or rather a form of them) to study currencies. Basically, they put that form of the indexes into a third model (a regression) along with currencies from around the world (separated into G10 and non-G10 countries).
They then announce what happened to the parameters of that regression model, as if the parameters were the observables themselves. What they should have done is “integrate out” the uncertainty in the parameters and state the results in terms of uncertainty in the observables, i.e. the currencies.
In their favor, most researchers commit this error, not knowing better. But because the certainty in the parameters is necessarily always greater than the certainty in the observables, they end up far too certain.
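Here is a small simulated sketch of that point. Nothing below is their data or their model, and it uses an ordinary frequentist prediction interval rather than the Bayesian “integrating out” I have in mind, but the lesson is the same: the interval for the fitted parameters is always narrower than the interval for a new observable.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data standing in for "vulnerability trend" vs. "currency return".
n = 60
x = rng.normal(size=n)                       # predictor (e.g. an index trend)
y = 0.2 * x + rng.normal(scale=1.0, size=n)  # noisy observable (e.g. a return)

# Ordinary least squares by hand.
x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)
b = np.sum((x - x_bar) * (y - y_bar)) / Sxx
a = y_bar - b * x_bar
resid = y - (a + b * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))
t = stats.t.ppf(0.975, df=n - 2)

x0 = 1.0  # a new value of the predictor
# Uncertainty in the fitted line at x0 (the parameters): narrow.
se_mean = s * np.sqrt(1 / n + (x0 - x_bar) ** 2 / Sxx)
# Uncertainty in a new observation at x0 (the observable): necessarily wider.
se_obs = s * np.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / Sxx)

print(f"95% interval for the fitted mean at x0:   ±{t * se_mean:.2f}")
print(f"95% interval for a new observation at x0: ±{t * se_obs:.2f}")
```

Announcing only the first interval, as most papers in effect do, makes the results look far more certain than the observables warrant.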
And that is not even considering the multiplication of uncertainties, which appears nowhere in their paper. Even if they had put their model in terms of observables, we would still have to consider, at least, the “climate change” models, all the models (assumptions) that go into the ND-GAIN indexes, and the regression model itself.
Plus, I’m leaving out a lot of detail about their models’ complexities (they run several models). The Notre Dame index’s “vulnerability” is a function of “Log GDP per capita”, “Price level”, “Cumulative current account”, “Debt-to-GDP”, “Unemployment”, “Industrial production”, and “Retail sales”, some of which are known, but some of which are themselves uncertain. Yet all are used as if they were certain.
Yet they conclude things like “we find trends in climate vulnerability predict currency returns.” There is no indication they are anything but certain about that conclusion, an impossibility.
If there is any predictive value in this model, it comes not from “climate change” certainty but from the fact that currencies change in time in modestly predictable ways (such as knowing whether a currency belongs to a G10 country). In essence, they have re-discovered time series modeling. Of course, time series models say nothing about cause, though they are often mistakenly thought to. That is a story for another day.
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.
This is a good example of how studies and their data can be manipulated (tortured) until they give up the results you want. The output is junk science. Emphasis on junk, not science.
These last two articles were tremendously helpful, because the average layperson has been conditioned to accept any "scientific" study as authoritative, to be taken without question. And, further, the average person is completely flat-footed when it comes to understanding and articulating why scientific pronouncements (usually communicated to him via a misleadingly short, unnuanced clickbait headline or an uncritical fluff piece) do not have to be blindly accepted as Truth spoken by the voice of God.
This whole charade is quite effective, however, because the average reader is put in a conundrum each time the next scientific truth is shallowly reported; increasingly it seems that, of the incredible daily volume of publication, only the scientific papers useful to various agendas get reported. Thus it is important that the constant drip of scientific "truths" not escape scrutiny and surreptitiously become part of the foundation of truth on which new policy will be built. But who has the time or intelligence to determine whether each new scientific finding is sound? Even experts in the field can struggle to pick apart a well-designed but fraudulent scientific paper. The coronadoom charade proved this many times over, and, indeed, some frauds are so well constructed you would be a fool not to fall for them.

And so the average Joe must either care enough to find a trusted source to analyze these things (good luck with that, as even the fact-checking apparatus has become compromised, and, further, most people are not equipped to judge between competing experts) or simply disregard scientific pronouncements without any basis whatsoever, which is its own special kind of error. There is a sort of dangerous hubris lurking behind the person who rejects positions outright on this or that seemingly ideological basis.
To put this conundrum differently, people are searching for a firm foundation. They are brainwashed in school, and told constantly afterwards, that the only truths that can be relied upon are scientific ones (particularly as it pertains to making public policy) and that no other way of knowing is valid. Yet intuitively people suspect some great fraud is being perpetrated on them, although few have the ability to truly understand its exact nature. Worse, some of the most vocal dissenters from the scientific fraud are themselves fraudulent in their own way, and so the people seem to reject one tyrant only to fall prey to a different kind of tyrannical charlatan. They reject the demonic game played out for eight years under Obama only to run straight into the arms of their false messiah, Trump. They reject much of the coronadoom nonsense because they intuitively recognized the deception, but then fell prey to almost blindly accepting any and all covid contrarians. What a mess ensues when a society sets about to deceive. Whom to trust?