The word to determine is ambiguous. It can mean ascertain or cause. The latter implies the former, but not vice versa. Knowing the full cause of things is our goal, not just the efficient cause.
Uncertainty & Probability Theory: The Logic of Science
Video
Links: YouTube * Twitter – X * Rumble * Bitchute * Class Page * Jaynes Book * Uncertainty
HOMEWORK: Given below; see end of lecture.
Lecture
This is an excerpt from Chapter 7 of Uncertainty.
A causal model is a collection of premises from which are deduced a set of propositions that are certainly true or false and where we have an understanding of the powers and essences of the objects considered. Without understanding of the powers and essences, the model can be no better than deterministic. For example, given the model “At time t the object will be red, else blue” the proposition “It is not time t and the object is red” is false. The object may be red at times other than t, and, if so, the model is falsified. But this is not a causal model, because we don’t have any understanding of the nature or essence of the color change. The premises are enough, however, to determine the change. We must be ever on guard against the analogical nature of the word determine. To make the model causal, we’d have to add the “why” of the color change.
The model may, of course, be perfectly predictive but that does not make the model causal. Why? Notice carefully that there is nothing in the model that describes the efficient cause of the object changing colors.
There are four types or kinds of causes: formal, material, efficient, and final. We can make the model causal by grasping the nature of the object and powers of the thing bringing about the (secondary) cause. The object of which we speak must have a certain form. Say, a red lollipop. The form of this sucker, as we called them in Detroit, is constructed of some material, usually sugar, chemical coloring and flavoring. Some thing took these materials and put them into the form we see; this might be an assembly line with its associated machinery. This machinery was the efficient cause. In the end, we eat the thing, which is its purpose or end; rather, the end was the creation of an edible piece of candy.
In the first example (a very weak model) there are no premises about the object changing form other than its color: it may change shape as well, but only to the extent that we still recognize the object as the same object. It’s not clear whether the color change will be the addition of new material (say, ink), the subtraction of another (perhaps by sun bleaching), or because it was the nature of the object to be red at times or blue at times, a change triggered by who knows what efficient cause.
A simple model is “If X then Y”. When somebody asks “Why Y?” the indubitable answer is “Because X.” This model is as simplistic as can be, but it is not trivial or empty. All our knowledge provided inductively, like axioms, is given in this form, where, of course, X might be a compound statement. On the other hand, these are not truly causal models: these are not even explanations. Why is it, or how is it, that the (say) principle of non-contradiction is true? We cannot say. It just is. We can say that we know it is true given our scant observations via induction. But that is an epistemological explanation and not a causal explanation. We cannot know why or how something that is necessarily true is necessarily true. That kind of understanding, like knowing the full nature of the first cause, is closed off to us. There are some things we must accept on faith.
True causal models instead relate to secondary causes, of the type mentioned above [in Class 41]. Y = “The vase is in shards.” Why Y? Because X = “The ball hit it,” where X is shorthand for the forces we know to be responsible. This, too, is a weak model because it only applies to this ball and this vase. But we can broaden it to all balls and all vases under specified conditions. This can be informal, as nearly all of our causal models are, or formalized with mathematics. The danger with mathematics, as ever, is the Deadly Sin of Reification, when we give life to the equations and forget they represent real objects. Also, we cannot mathematize all parts of most real-life events; we can only create abstractions from them. Reification happens, all too often, when we forget that our creations are abstractions and not reality.
A fuller example. The equation for the height y of a projectile is given as

y = x tan(θ) − g x² / (2 v0² cos²(θ)),
where θ is the initial angle of the projectile launched with initial velocity v0, g is the gravitational acceleration, and x is the distance from the (arbitrary) origin of the throw. This is a deterministic model. It says that given g, v0, and x, y will be such-and-such a value with certainty. But it doesn’t say what causes y; it only says the value of y is determined, i.e. made known, because of these other things. To understand the efficient cause of y, we must go deeper. The nature of the “projectile” is vague enough, but we understand that it is the nature or power of the impelling force, and of the other forces, to cause the projectile to scuttle along. It could be a coincidence that this equation yields perfectly reasonable, or even completely accurate, predictions while other causes are the real cause of y. We only know to stop this kind of thinking after induction tells us we have understood the essence or nature of the situation.
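To see the deterministic character of the model concretely, the trajectory equation can be evaluated directly. A minimal sketch; the launch angle, speed, and distance below are illustrative numbers of my own choosing, not from the lecture:

```python
import math

def projectile_height(x, v0, theta, g=9.81):
    """Deterministic trajectory: y = x*tan(theta) - g*x^2 / (2*v0^2*cos^2(theta)).

    Given exact premises (this precise g, v0, theta, x), y is determined,
    i.e. made known, with certainty -- no plus-or-minus anywhere.
    """
    return x * math.tan(theta) - g * x**2 / (2 * v0**2 * math.cos(theta)**2)

# Illustrative: a 45-degree launch at 20 m/s, evaluated 10 m downrange.
y = projectile_height(x=10.0, v0=20.0, theta=math.radians(45))
print(y)  # a single fixed value, roughly 7.55 m
```

The point is that the same premises always yield the same y; nothing in the computation describes what efficiently causes the projectile to be at that height.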
The premises of this model are explicit and not subject to “fuzzy” interpretation. One premise is “This precise single fixed g”, not “This g more or less.” The same is true of the proposition itself: y will be this and only this value, not this value plus-or-minus or more-or-less. Add “close enough” conditions to the premises and the model retains its deterministic status; adding them to the proposition of interest turns the model from deterministic to (perhaps partly) probabilistic. Many deterministic models are treated probabilistically when the propositions of interest are observed to be false. If, given the premises, the projectile is not precisely at y, but at y more-or-less, it is usually thought that the main model is doing “most of the causing” and therefore there must exist unobserved or unnoticed, or at least unconsidered, causes that also operate on the projectile which are not in the model. This is fair enough when the nature or essences of the main causes are certain, but we really have a probabilistic model.
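The move from deterministic to probabilistic described above can be sketched in code. Here the deterministic equation is kept intact, and the “y more-or-less” observation is modeled as the deterministic value plus scatter standing in for unconsidered causes. The Gaussian form and the value of sigma are my assumptions for illustration only, not part of the lecture:

```python
import math
import random

def projectile_height(x, v0, theta, g=9.81):
    """Deterministic trajectory height (same equation as in the text)."""
    return x * math.tan(theta) - g * x**2 / (2 * v0**2 * math.cos(theta)**2)

def observed_height(x, v0, theta, sigma, rng):
    """'y more-or-less': the deterministic value plus scatter representing
    unobserved or unconsidered causes. Gaussian scatter with this sigma is
    an illustrative assumption, not a claim about real projectiles."""
    return projectile_height(x, v0, theta) + rng.gauss(0.0, sigma)

rng = random.Random(42)
theta = math.radians(45)
samples = [observed_height(10.0, 20.0, theta, sigma=0.05, rng=rng)
           for _ in range(1000)]
# The samples cluster around the deterministic value but no single
# observation is exactly it: the model is now (partly) probabilistic.
```

Adding the “close enough” condition to the proposition of interest, as here, is exactly what turns the deterministic model into a probabilistic one.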
Finally, if we do understand the cause of some thing, we don’t need a model or experiment. Why? Because we know the cause! We only need deterministic models (like the projectile equation) to understand the extent of a cause in a situation with specified conditions. This is, of course, a trivial observation, but it will have an important sequel.