Next week we finally begin probability. Let there be joy!
HOMEWORK: Read. I mean it. Read the chapter below.
Lecture
This is Chapter 3 of Uncertainty. All the references have been removed. The dollar signs around text indicate equations I didn’t have time to convert.
“[W]hen Mr Wells says (as he did somewhere), `All chairs are quite different,’ he utters not merely a misstatement, but a contradiction in terms. If all chairs were quite different, you could not call them `all chairs.'”—G.K. Chesterton
There is no knowledge more certain than that provided by induction. Without induction, no argument could, as they say, get off the ground floor; this is because induction provides that ground floor. No argument could even be phrased if it were not for induction, because phrasing requires language and language requires induction. When we say apple, we know it applies to all apples.
All arguments must trace eventually back to some foundation. This foundational knowledge is first present in the senses. Through noesis or intellection, i.e. induction, first principles, universals, and essences are discovered. Induction is what accounts for our being certain, after observing only a finite number of instances or even one and sometimes even none, that all flames are hot, that all men are mortal, that white is always white, that for all natural numbers $x$ and $y$, if $x = y$, then $y = x$, and for the content and characteristics of all other universals and axioms. Because we know these indubitable propositions more surely than any other, induction produces greater certainty than deduction.
Arguments are not valid because of their form. It is not because of barbara (a common syllogistic form) that, from “all men are mortal” and “Socrates is a man”, we deduce Socrates is mortal. That conclusion is made obvious to us by observing the congruence of all the propositions in the argument, and it is induction that tells us the major premise is true and that the congruence provides proof. But induction only works for telling us what we know and not why a necessary truth is caused to be true. Indeed, the standard story is backward: barbara and other syllogistic forms are discovered from instances like “All men…” via induction. Induction provides the certainty that, given these premises, the proposition or conclusion is true. Just as it provides the certainty that the probability of “George wears a hat” is 50% given “Half of all Martians wear hats and George is a Martian”.
Mistakes in induction occur, as they do in every area of intellectual activity. When a man sees several white swans and reasons, “All swans are white”, he is proved wrong when a black swan is sighted (as in Australia). Why he is wrong is explored below. When a poll, asked of one set of people, is enlisted to “work” on a second set of folks who differ in those characteristics responsible for the first set’s answers on the poll, again induction fails. But statistical induction, of which polls are instances, is only one kind of induction. Induction is of five different kinds, some more and some less strong.
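The poll failure can be made concrete with a toy simulation. This is a hypothetical sketch: the two groups, their “yes” probabilities, and the sample size are all invented for illustration.

```python
import random

random.seed(1)

# Hypothetical setup: group A answers "yes" with probability 0.7, while
# group B, which differs in the characteristics that drive the answers,
# answers "yes" with probability 0.3. Both numbers are made up.
def poll(p_yes, n):
    """Fraction of 'yes' answers in a simple random sample of size n."""
    return sum(random.random() < p_yes for _ in range(n)) / n

estimate_from_A = poll(0.7, 1000)  # the poll is taken on group A only
truth_in_B = 0.3                   # what actually holds in group B

# Enlisting A's estimate to "work" on B misses badly, because the tacit
# inductive premise -- that B shares the relevant essence with A -- is false.
error = abs(estimate_from_A - truth_in_B)
```

The induction itself is sound; what fails is the unstated premise that the second group resembles the first in the relevant characteristics.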
Since at least Hume it has been fashionable to pretend mystification about why induction is “justified” or to claim that it is not. Hume said, “We have no reason to believe any proposition about the unobserved even after experience!”. Howson & Urbach (p. 4), for instance, say that there is no “solution” to induction and that this sad fate “is no longer controversial.” Speaking of induction, Karl Popper asked, “Are we rationally justified in reasoning from repeated instances of which we have experience [like the hot flames] to instances of which we have had no experience [this flame]?” His answer: “No”. He also said, “The truth of any scientific theory is exactly as improbable, both a priori and in relation to any possible evidence, as the truth of a self-contradictory proposition” (i.e., it is impossible).
Fisher, though not of the same skeptical bent—he often talked about how scientists used inductive reasoning, though he wasn’t always entirely clear about what he meant by “inductive”—agreed in principle with Popperian ideas and used these beliefs to build his system of statistics. Theories (propositions) could only be “rejected” and never verified and so on. Popperian skepticism is common in statistics. For example, a well known statistician on his blog wrote “[I]nduction doesn’t fit my understanding of scientific (or social scientific) inference”; also see \cite{Gel2011} for the standard limited view of induction in statistics. As we will see, such skepticism is unwarranted.
Metaphysics
Why a section on metaphysics in a book about probability? Because probability, like the philosophy of anything else, must rest on some ground. That ground is our knowledge and understanding of essence, substantial forms, intellection, on the quiddity (the whatness of things) and (in the old way of speaking) induction.
Material things are composed of matter (or matter and, equivalently, energy) and form, and the study of such is called hylemorphism. A simplistic example: the same lump of clay may be fashioned into a vase, an ashtray, a fanciful backscratcher, or many other things, each of a different form. A substantial form of an ashtray would be those elements, or rather arrangements of matter, that make the clay an ashtray and not, say, a backscratcher. We might say an ashtray has a bottom to collect ash, and so a sculpture of a donut made of clay would lack the substantial form of an ashtray. An accidental form is a form a thing has which is not essential to it. For instance, the ashtray can be an ashtray with carved initials or without. The initials are a form, but their absence would not remove the substantial form of the ashtray.
The essence of a thing is its nature, its whatness. Everybody knows that a chair is not a house nor is water mercury nor are men chickens, except metaphorically. But even metaphors need induction to work. Everybody, even non-scientists, knows there are differences between kinds of things because they understand the essence of different objects—chairs, houses, water, mercury, men, chickens—and they know at least partially the reasons behind that which makes them what they are, even if they do not comprehend the totality of their essences. Nor need any man understand all there is to know of a thing to understand part of its essence. The doctrine that things have natures, and the study of their substantial forms, is called essentialism. The best book for readers of this volume is David Oderberg’s Real Essentialism, where these matters are defined and defended at length and (I think) conclusively.
All scientists and all users of models of uncertainty take essentialism for granted. Whenever “data” are collected, they are collected on like objects, objects which have the same essence (or are thought to). A doctor testing a new pill collects measurements on people, and not people, ferns, and Buicks. A physicist measuring heat ensures to the best of his ability that the apparatus and experimental milieu remain the same or similar for each iteration. The essence of interest for the doctor, though this is usually unacknowledged, is human beings. He knows what is a human being and what isn’t. And similarly for any controlled experiment: essences are a given. This is not to say that mistakes in identifying essences aren’t made. Finger bowls are drunk from. But there can’t be accidents like these without essences (this will turn out to be a good pun). Science is (a weaker pun) the essence of discovering essences.
Deduction assumes essence. If we accept “All men are mortal” and “Socrates is a man”, it is deduced that “Socrates is mortal” because we know that Socrates, being a man, shares in the essence of men. There is more to arguments than that, of course, because we also have to understand mortal, all, is, and so forth. All of these words, some of which describe essences and some other things, are known inductively.
To expand on the examples above, the essence of a thing is not merely a collection of its properties. The lumberyard, which contains all the necessary components for a house, is not a house. We need to marry the material of the house with its substantial form for the house to come to be. An accident is a property that does not necessarily have to belong to an object for it to retain its essence. Houses can be white or green; a house’s color is an accident; that it has to have a color is part of its essence. Humans can wear shorts, dresses, pants, or nothing; clothing is an accident. Everybody can grasp that this is a house or that is a human regardless of the accident of color or clothing. It is a necessary or essential property of houses to have roofs and for humans to have hearts, even “artificial” ones (the word acknowledges essence). It is the essential properties of a thing that define the thing. A roof-less house would not be a house but a shell without a cover; likewise a human without a heart is not human but a corpse. Suppose a house is under construction and lacks a roof because of a hurricane. That some house-like objects do not have roofs does not imply that houses don’t have to have roofs to be houses, or that the essence of a house is impossible to define. A three-legged dog which has ignored the advice about playing in traffic is not a stool. Everybody knows that four legs are part of the essence of being a dog. Scientists who study dogs and those who breed them know more about what is essentially a dog than most people, but again the entirety of an essence does not have to be known. People can tell the difference between dogs and cats and kumquats. Some dogs are missing legs and some are missing ears, just as some are black and others brown. These are all accidents. We come to know the essence or nature of a thing by recognizing its essential properties and not by its accidents.
This too is acknowledged by the experimenter. The doctor trying out a new prostate treatment gives it to many men. Each of these men has the same essence—they are men!—but they differ in accidental ways. Some are taller, some shorter, some have gene variant $G_1$ others variant $G_2$, some grew up in this region and others that, some smoke and others don’t, and so on. It is not that accidents are ignorable. In the doctor’s experiment, many accidents of the men will be measured because it is thought or conjectured that the accidents, or some combination of them, interact differently with the causal powers of the treatment. I take this up again in the Chapter on causality, but it should be obvious that if there were no accidents between (or in) the men and the experimental protocol was identical for each, then the outcome would be of the same nature for each; where the “same nature” might not mean “identical” but could, but where it does mean “in a known stated range”. This will make more sense when discussing causality and quantum mechanical events. If there were no accidents but the protocol varied between men, however slightly, then outcomes could be different.
Types of Induction
One reason induction is widely misunderstood, even considered a “problem” in the academic sense, is because it is analogical. Mistaking one use of induction for another is equivocation, and, as I stated, equivocation makes the best jokes (Two cannibals are eating a clown and one says to the other, “Does this taste funny to you?”) but the worst fallacies. As with all analogical terms, sometimes it is harmless to leave a word unadorned and sometimes it is not. In this section, and elsewhere when needed, I hyphenate the word to indicate which type of induction is meant.
In this chapter I largely follow Louis Groarke’s wonderful An Aristotelian Account of Induction, which is must reading, especially given the controversy over this topic. Groarke’s work is largely unknown in probability and statistics, but it shouldn’t be. There is no way to adequately summarize the entire work, which is long and deep. Only a few highlights sufficient to dispel the sense that induction is problematic are given here.
“The goal of induction,” Groarke tells us, “is not simply to prove that something is the case but to provoke an understanding of the general case.” We here and elsewhere follow the dictum nihil in intellectu nisi prius in sensu, “there is nothing in the intellect which is not first in the senses.” Our senses tell us what is the case. Induction moves from the particularities collected by the senses to unobservable, unsensible generalities or universals, such as knowledge of a thing’s essence. Induction starts with the finite and progresses to the infinite; so although we can never entirely grasp the infinite, we can and even must know part of it. Induction, Aristotle’s epagoge, comes in different flavors; at least five. There is no appreciation of this that I have ever seen in the uncertainty, probability, or statistics literature, except in the works of David Stove, Donald Williams, and a few others. But, as we shall see, even they only “justified”—by which I mean clarified what was already true: no human can “justify” any necessary truth—one form of induction, i.e. induction-probability, which most people already believed unproblematic. Certainly all probabilists and statisticians acted as if induction-probability were useful.
According to Groarke’s view, induction is “the cognitive/psychological mechanism that produces the leap of insight” necessary for all understanding. He gives five flavors, aspects, or facets of induction. These are (in my modified terms more useful for probability) (1) induction-intellection, (2) induction-intuition, (3) induction-argument, (4) induction-analogy, and (5) the most familiar, induction-probability. The order is from that which provides the most to the least certainty.
Induction-intellection is “induction proper” or “strict induction”. It is that which takes “data” from our limited, finite senses and provides “the most basic principles of reason.” Senses provide information of the here-and-now (or there-and-then), but induction-intellection tells us what is always true everywhere and everywhen. We move with certainty from the particular to the general, from the finite to the infinite. Without this kind of induction, no argument can ever get anywhere, no argument can ever even start; without it language would not be possible. Induction-intellection “Operates through infallible exercise of [nous], through the activity of intellection, understanding, comprehension, insight.” It produces “Abstraction of necessary concepts, definitions, essences, necessary attributes, first principles, natural facts, moral principles.” In this way, induction is a form of reason superior to mere deduction, which is something almost mechanical, and can be done on a mindless computer. Induction-intellection is instantaneous learning; it proceeds by “flashes” of insight. (How this happens is not a question here answered; that it happens is indubitable.) Intellection-inductions are not found in the slogging labor of mechanically working out consequences of accepted premises, like deductive reasoning is. The knowledge provided by induction-intellection comes complete and cannot be deduced; it is the surest knowledge we have. Numbers come from this form of induction. We see one apple, two apples, three. And then comes 1, 2, 3, … Deduction has much to say about that “…”, but knowing that we can reason deductively comes from this form of induction. See Groarke’s paper for a discussion of how induction works (or might work), a topic I do not even here broach.
Induction-intuition is similar to induction-intellection. It “operates through cleverness, a general power of discernment or shrewdness” and provides knowledge of “any likeness or similitude, the general notion of belonging to a class, any discernment of sameness or unity.” Axioms arise from this form of induction. Axioms are of course the ground of all mathematical reasoning. We have to be careful because some use the word axiom loosely, and merely mean assumption, a proposition which is not necessarily believed but is desirable: thus, by axiom I mean those base propositions which are fundamental and believed by all those who consider them (like Peano’s axioms, etc.). This fits with work like Goedel’s, who proved, as it were, that axiomatizing never stops, that induction-intuition must always be present, that not all things can be proved by reason alone. We don’t know that syllogisms, for instance, provide validity except first by knowing via induction-intuition that syllogisms are deductive. The foundational rules of logic are provided to us by this form of induction. These rules do not occur to all—not everybody comprehends every truth, as is clear in any prolonged contact with students—but that rules are grasped (if they are) is done via induction and not reason. We observe that our mom is now in this room and now she isn’t, and from that induce the principle of non-contradiction, which cannot be proven any other way. If mom is here, she can’t not be here. Therefore, induction says, if a thing exists it can’t simultaneously not-exist (our knowledge about a thing’s existence can be indefinite, of course). No universal can be known except inductively because nobody can ever sense every thing. Language exists, and works, because of induction-intuition.
Induction-argument, given by inductive syllogisms, is the “most rigorous form of inductive inference” and provides knowledge of “Essential or necessary properties or principles (including moral knowledge)”. The forecaster on television announces E = “It’s probably going to rain tomorrow” and we’re interested in the conclusion Y = “It will rain tomorrow”. Y does not follow validly from E, but we can induce Y given the tacit premise P = “All the times I heard this guy on TV say it will rain he’s been right”. Y is now certain if we accept this additional premise. But that is a weak example. A better one is when a physicist declines to perform an experiment on electron number 2 because he has already performed the experiment on electron number 1, and he claims all electrons are identical. Induction-argument can provide conditional certainty, i.e. conditional truth.
Induction-analogy is the least rigorous but most familiar (in daily life) form of induction and provides knowledge of “What is plausible, contingent or accidental; knowledge relating to convention, human affairs.” This form of induction explains lawyer jokes. (What’s the difference between a good lawyer and a bad lawyer? A bad lawyer makes your case drag on for years. A good lawyer makes it last even longer.) Stereotypes fall under this form. As Goldberg has repeatedly shown us, stereotypes are often accurate, but the causes given for these stereotypes are just as often in error. See also the work of Jussim on the correctness of many stereotypes.
Induction-probability of course is the subject of most of this book. It provides knowledge of “Accidental features, frequency of properties, correlations in populations” and the like. It is, as is well known by anybody reading these words, the most prone to error. But the error usually comes not in failing to see correlations and confusing accidental properties with essences, but in misascribing causes, in mistaking correlation for causation even though everybody knows this admonition. Knowledge of this “law” never seems to stop everybody violating it, though.
In trying to solve the “problem” of induction Stove and Williams separately showed that reasonably sized (but still small) samples can “resemble” their populations in essence. Their results should come as no surprise to any statistician familiar with sampling, though some philosophers over fond of Hume and his progeny were taken aback: many do not seem to be able to resist the allure of doubt. On the other hand, Stove and Williams and their sympathizers do not appear to understand that they are demonstrating the soundness of only one type of induction, i.e. induction-probability.
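The Stove–Williams point, that modest samples usually “resemble” their populations, is easy to check numerically. Here is a minimal sketch; the population size, trait proportion, and sample size are arbitrary choices made for illustration:

```python
import random

random.seed(42)

# A finite population in which 60% of members have some trait.
population = [1] * 6000 + [0] * 4000
p_true = sum(population) / len(population)  # 0.6

# Draw many modest samples and count how often the sample proportion
# lands within 0.1 of the population proportion.
n, trials, close = 100, 2000, 0
for _ in range(trials):
    sample = random.sample(population, n)
    if abs(sum(sample) / n - p_true) <= 0.1:
        close += 1

# The vast majority of samples "resemble" the population.
share_close = close / trials
```

Most samples of only 100 match the population proportion to within ten points; this is the combinatorial fact behind induction-probability, not a “solution” imposed from outside.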
In general, and across all types or flavors, an induction is an inference about the unobserved conditioned on that which has already been observed, where the propositions of both (unobserved and possibly unobservable and the observed) are in some way similar, i.e. share an essence. Induction is another way to speak of essences, with varying degrees of certainty depending on the type. But there is another sense of the word initiated by Carnap and others. To them, induction is any (believed) inference not deductively valid, which is a much wider class of inferences than what I mean by inductive inferences, an enormous class which contains even absurdities. “If the moon is made of cheese then it’s noon” is “inductive” in this sense, thus it is not surprising that some see a problem with “induction” when it is put into such loose terms. Part of the difficulty is that modern logicians will call this proposition true because of its (logical) form whereas the civilian, scientist, and old-school logician will think the modernist is nuts. Why insist on such bizarre arguments! It’s not form that is important, but cause and essence, knowledge of which is provided by real induction. Since our goal is to understand (logical) probability, I adopt the older meanings of induction, as given above. I also do not use, recommend, or like Carnap’s use of the term “inductive probability”. Probability is probability, but Carnap often used his term to apply to non-inductive arguments.
All these forms of induction have “stages” or levels. The first is ordinary, the simple noticing of how things are, like sugar tends to be sweet, ice floats, that people walk. The second is the identification of accidents, like all people on this street are wearing (or rather affecting) peasant clothing (jeans). The third is the abstraction of necessary properties, that people have two legs, that they breathe air, that one plus one must equal two. Finally comes the realization of essences, that having two legs, a heart, and the powers of rationality is what makes a human. Probability models, as we shall see, come into play in each.
Here is an example tying induction to essence. Suppose I observe a raven. It’s black. I see a second, also black. And so on for a few dozen more. I reason, or rather I argue with myself, “Since all the many ravens I’ve seen have been black, the next raven I see will be black.” There are seeming problems with this self-argument, this induction-argument. It appears to be invalid since, as is probably obvious, it might be that a non-black raven, perhaps even an albino raven, exists somewhere. And if that’s true, then the next raven I see might not be black. Also, the argument is incomplete—as written, though not as thought. As thought, it contains the implicit premise “All ravens are the same color.” That makes the entire argument: R = “All ravens are the same color and every raven I have seen was black; therefore the next raven I see will be black.” That argument is valid.
Therefore, it is a local truth that “The next raven I see will be black” given those premises. We are back to the same kind of situation as when we discussed Gettier problems. What is our goal here? Is it to assess the truth or falsity of the premises? Or to make predictions? Given the premises are true, then it necessarily follows I will make flawless predictions.
Now “every raven I have seen is black” is true (I promise), so the only question is “All ravens are the same color.” Where did that arise? That was an induction-intuition, arising from the judgment that having black feathers is the essence of being a raven, or at least part of the essence. If this judgement is true, if having black feathers is essential to being a raven, then this premise is also true and the conclusion to R follows. The crux is thus the step, i.e. the induction, from the observations to an understanding of what it is to be a raven. But white ravens have been observed, and it is said (by biologists) that these suffer from a genetic defect. A defect is thus a departure from the “norm”, from what is expected, and what is expected is the form given by the essence. With this in mind we can fix the argument. R’ = “All the ravens I’ve seen have been black and it is the essence of ravens to be black; therefore the next raven I see which is properly manifesting its essence will be black.” This is a valid argument, and sound if indeed, as induction tells us, having black feathers is part of the essence of being a raven.
Some people have mistakenly identified features thought to be essential but which were instead accidents. It is not essential that all swans have white feathers; some have black. But because mistakes are made in the induction of essences does not prove that inductions are of no use. Many people make mistakes in math—probably more than who make mistakes in inductions of essences—yet we do not say math is a “problem”, where that word is used in its philosophical sense as an unresolved or paradoxical question.
Another example. Who is taller, men or women? Obviously, some women are taller than some men, but everybody knows, via induction from finite observations, that it is the nature or essence of men to be taller. Probabilistically, as we shall learn, it means only that if all we know is the nature of sex differences and that A is a man and B a woman, then the chance A is taller than B is greater than 50%. Not everybody knows why this sex difference is so, nor need they know why. It is enough for most decisions and questions to know it exists. Knowing more, we will learn, we can tighten that probability, or come to a deeper understanding of biology. But these are niceties.
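That probabilistic reading can be illustrated with a simple normal model of heights. The numbers below are assumptions chosen for illustration, not measured data:

```python
import math

# Assumed (hypothetical) height model: men ~ Normal(175, 7^2) cm,
# women ~ Normal(162, 6^2) cm. The parameters are illustrative only.
mu_m, sd_m = 175.0, 7.0
mu_w, sd_w = 162.0, 6.0

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# If A and B are drawn independently, A - B is normal with mean
# mu_m - mu_w and variance sd_m^2 + sd_w^2, so:
p_taller = normal_cdf((mu_m - mu_w) / math.hypot(sd_m, sd_w))
```

Under these assumed numbers the chance A is taller than B comes out to about 92%: comfortably above 50%, yet some women are still taller than some men, exactly as the text says.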
We know, via induction, that “men are taller than women”, and we know what that phrase means. Goldberg points out that the within-group difference of height in men or women is much larger than the between-men-women difference, but nobody is foolish enough to think this means that men and women are equally tall, or that the small between-group difference doesn’t lead to large differences both on average and at the extremes. Yet several very good scientists have been caught making the error that because within-group differences are larger than between-group differences, the between-group differences are, somehow, not there, i.e. that somehow induction has misled us. In other words, because men vary in height considerably, and women too, and because the average difference in height between men and women isn’t as large as these differences between men or between women, it doesn’t follow that induction has lied to us and that men aren’t taller than women. Of course, nobody does make this mistake regarding height differences. The error is usually made on more politically or socially charged questions.
Many applications of induction in science are made at the lowest induction-probability level, where our knowledge is the least sure. Experiments or observations are made to provide the grist for the inductive-probability mill. We observe most men are taller than most women, so we move, via induction-probability, to say it is of the essence of men to be taller than women, which has as shown a probabilistic interpretation. Male-female sex differences in height (and other characteristics) in this probabilistic sense are so firm a piece of knowledge that official “randomized controlled trials” (a term which is highly misleading, as we shall see) aren’t run to confirm them. But in most instances in science where experiments are run or observations taken we do not claim knowledge of essential differences. The methods used in classical procedures, both frequentist and Bayesian, are sometimes thought to prove these differences, but we later learn this view is false. We must always come to knowledge of essence via induction, and while experiments and observations can provide the grist they cannot turn the crank of the mill. This is done via intellection. Knowledge does not come via the result of some mathematical calculation such as hypothesis testing. In short, we can never do without induction in some form if we want to do science.
Grue
The so-called problem of grue (to be defined momentarily) was introduced by Nelson Goodman in 1954 as a “riddle” about induction, a riddle which has been widely thought to cast doubt on the validity and rationality of induction. That unnecessary doubt in turn is partly responsible for the reluctance to adopt the view that probability is part of logic. Several authors have pointed out deficiencies in grue. Nevertheless, the “problem” still excites some authors.
Here, adapted from Groarke (p. 65), is the basis of grue, along with another simple demonstration that the “problem” makes no sense (Groarke lists others, as does Stove). Grue is a predicate, like green or blue, but with a built-in ad hoc time component. Objects are grue if they are green and observed before (say) 21 October 1978 or fast and observed after that date. A green grape observed 20 October 1978 and a fast (say, white) car observed 22 October 1978 are grue. But if you saw the green grape after 21 October 1978, or remember seeing that fast car in 1976, then neither is grue. The definition changes with the arbitrary date.
Imagine it’s before the Date and you’ve seen or heard of only green emeralds. Induction (of what type?) says future, or rather all unobserved, emeralds will also be green. But since it’s before the Date, these emeralds are also grue, thus induction also says all unobserved emeralds will also be grue. Finally comes a point after the Date, and lo, a green and not a fast emerald appears, thus not a grue emerald. Induction, which told us that emerald should be grue, is broken!
Why is this wrong? The reason we expect (via induction) unobserved emeralds to be green is we expect that whatever is causing emeralds to be green will remain the same through time. Whether this is the formal, material, efficient, or final cause depends on the perspective one takes, of course, but unless there is other specific information, we expect constancy of cause. We comprehend the essence of what it is to be an emerald is unchanging. And that is what induction is: the understanding of this essence, an awareness of cause. Rather, that is one form of induction, as we now know.
Nobody has ever seen a fast emerald; neither are blithe, winsome, electrifying, salty, nor brutal emeralds observed. Nobody has ever seen a blue one either, yet it is blue that is the traditional alternate predicate stated in the “problem”, not fast or blithe, etc. The choice of alternate predicate is arbitrary; there is nothing special about blue. Using an absurd one like fast makes the so-called problem of grue disappear, because we realize that no emerald can suddenly change nature from green to fast. That is, our understanding (via induction) that it is the essence of emeralds to be green, that some thing or things are causing the greenness, is what leads us to reject the idea that this cause can suddenly switch and create blithe or fast emeralds instead of green ones.
Incidentally, there is no causation in the predicate grue, as has often been noted. Which is to say, the riddle does not suppose emeralds are changing their nature (meaning no change in any formal, material, efficient, or final cause takes place), but that induction is supposed to indicate that some change in nature should take place on the Date but doesn’t. After all, some thing or things must operate to cause the change. Grue, then, is a mix-up in understanding causation.
Again, we do not know of any cause (or any type) that will switch emeralds abruptly from green-mode to blue-mode or to fast-mode. It is thus obvious that the predicate blue is what caused (in our minds) the difficulty all along. We observe that colors change in certain objects like flowers or cars. Via induction, we expect that this change is natural or is of the essence of these objects. Why? Because we’re aware of the causes of color change which make the object at one time this color and at another time that color. For instance, a leaf changing from green to red on a certain date. This does not shock because we are aware of the cause of this change. Amusingly, if we re-create the grue “problem” for the leaf using green and red, and we get the right date, then grue-type induction works for autumn leaves.
There was never anything wrong with induction. Far from causing us to doubt induction, thinking about grue strengthens the confidence we have in it because we realize that grue seemed problematic because it tortured our understanding of what caused emeralds to be green.
Groarke calls belief in Goodman’s grue “an adamant will to doubt rather than an evidence-based example of a deep problem with induction” and likens it to the fallacy of the false question (e.g. “Have you stopped using p-values yet?”). Groarke says (p. 65):
The proposition, “emeralds are grue,” [if true] can be unpacked into three separate claims: emeralds are green before time $t$ (proposition$_1$); emeralds are blue after time $t$ (proposition$_2$); and emeralds turn from green to blue at time $t$ (proposition$_3$). Goodman illegitimately translates support for proposition$_1$ into support for proposition$_2$ and proposition$_3$. But the fact that we have evidence in support of proposition$_1$ does not give us any evidence in support of all three propositions taken together.
What does the arbitrary time have to do with the essential composition of an emerald? Not much; or rather, nothing. Again, the reason we expect (via induction) unobserved emeralds to be green is we expect that whatever is causing emeralds to be green will remain the same. That is, the essence of what it is to be an emerald is unchanging, and that is what induction is: the understanding of this essence, an awareness of cause. Groarke emphasizes that the time we observe something is not a fact about the object, but a fact about us. And what is part of us is not part of the object. Plus, the only evidence anybody has, at this point in time, is that all observed emeralds have been green. We even have a chemical explanation for why this is so, which paradox enthusiasts must ignore. Thus “there is absolutely no evidence that any emeralds are blue [or fast or hyperbolic or tall or etc.] if observed after time $t$.”
Somewhat related to induction and grue is Kripke’s quus example, which I’ll hold off on until discussing under-determination in Chapter on causality.