Forgive me, my friends: because of reasons I am terribly far behind on everything. Regular posting to resume soon.
A while back, in a post, I was ranting about something like p-values, or some such, and advocating one of my standard fixes in an offhand manner. This prompted some intelligent questions from a reader at the Substack mirror (all posts are identical at the blog and SS).
Now an intelligent person would have made note of what the original post was. I can’t remember. However, we don’t need it for the questions.
Les Fleurs du mal
Some questions for you. I apologize if you already answered this in your book as I have not had a chance to read it.
[Quoting me:] “The fix? There is no fix. There is a slight repair we can make, by acknowledging the conditional nature of probability, that it is only epistemological, that at a minimum the only way to trust any statistical model is to observe that it has made skillful (a technical term) useful (a technical term) predictions of data never before seen or used in any way.”
Question 1: Which specific statistical practices do you believe should be deprecated?
Question 2: How do you propose quantifying uncertainty without using statistical models? Furthermore, how could uncertainty in real-world propositions be quantified without the use of models?
Question 3: What type of evidence or studies could alter your opinion that significant changes are needed in the way statistics is practiced?
Question 4: What specific recommendations do you have for how researchers can better convey that statistical findings are contingent on modeling assumptions?
All the answers are indeed in Uncertainty; here’s a brief summary of them.
Q 1: All parameter-centric analyses. Get rid of them. They’re outta here.
If probability doesn’t exist, and it does not, then the parameters—the knobs and dials—inside probability models exist even less. Yet concentration is everywhere, or nearly everywhere, on these non-existent little creations of our minds.
The key reason is historical: the math was easy, and could be done by hand. The modern reason is the mistaken idea that the parameters are real and have “true values” that cause things to happen.
What the customer wants are answers to questions like “If I do X, what happens to Y?” In some fields, like parts of physics and chemistry, we tell him. In any statistical field, we do not tell him; instead we substitute another question for his (without informing him), and start spouting things about parameters, which we hint are “really” X.
Solution: answer “If I do X, what happens to Y?”
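To make the contrast concrete, here is a minimal sketch, in Python with entirely hypothetical data, of the difference between the parameter-centric report and the observable-centric one. The names, numbers, and the simple regression setup are all illustrative assumptions, not anything from the original post; the sketch also uses the plug-in predictive distribution, ignoring parameter uncertainty, for brevity.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

# Hypothetical data (illustrative only): X = some exposure, Y = some outcome.
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)

# Parameter-centric habit: estimate the "knobs and dials" (slope, intercept)
# and talk about them as if they were the answer.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (n - 2))

# Observable-centric answer: Pr(Y > y0 | X = x_new, data, model) -- a
# statement about the thing the customer can actually see.
def pr_y_exceeds(y0, x_new):
    mu = beta_hat[0] + beta_hat[1] * x_new
    z = (y0 - mu) / sigma_hat
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # normal tail probability

print(f"Pr(Y > 6 | X = 8) = {pr_y_exceeds(6.0, 8.0):.2f}")
print(f"Pr(Y > 6 | X = 2) = {pr_y_exceeds(6.0, 2.0):.2f}")
```

The point of the sketch is the final two lines: a probability of an observable, stated in terms of "If I do X, what happens to Y?", rather than a claim about an unobservable parameter.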
Q 2: I don’t. Not in formal applications. One can model; why not? But the answers should be put in terms of observables, as in Q1. And then tested, preferably by disinterested parties.
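The "then tested" step can also be sketched. Below is one common way to check whether a model's predictions of never-before-seen data are skillful, namely beating a naive reference model on a holdout set; the data, the split, and the choice of mean-squared error are my illustrative assumptions, not a prescription from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data (illustrative only).
n = 300
x = rng.uniform(0, 10, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1.5, n)

# Hold out data the model never sees during fitting.
x_tr, y_tr = x[:200], y[:200]
x_te, y_te = x[200:], y[200:]

# Fit a simple line on the training set only.
b1, b0 = np.polyfit(x_tr, y_tr, 1)  # slope, intercept
pred = b0 + b1 * x_te

# Naive reference model: always predict the training mean.
naive = np.full_like(y_te, y_tr.mean())

mse_model = np.mean((y_te - pred) ** 2)
mse_naive = np.mean((y_te - naive) ** 2)
skill = 1.0 - mse_model / mse_naive  # > 0: model beats the naive reference
print(f"skill score: {skill:.2f}")
```

A model with skill greater than zero has at least earned a hearing; ideally the holdout data and the scoring would be in the hands of disinterested parties, as the answer above says.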
Of course, in most real-world thinking we never, or rarely, formally quantify our uncertainties. And we don’t have to. These are still models, though. Models are everywhere and necessary to thinking. We test our everyday models, continuously, too. That’s why we get good at judging commonplace uncertainties.
Models: chance I spill the coffee with this much in the cup and walking along this path; chance I’ll need to buy this extra steak for the week, in case you-know-who drops by; chance the boss will harass you if he sees you in the ante meridiem.
On and on and on. All models, none quantified. You cannot think without them. Scientists merely make the process formal, which is no bad thing per se. It becomes bad when scientists commit the Deadly Sin of Reification. As in Q1.
Q3: All fields which primarily use statistics are bad to a greater or lesser degree. The most harmfully bad are epidemiology and public health, because we have to suffer under the thumbs of midwit Experts who rely on their “research”.
Q4: I’ve given acres of advice along these lines, all boiling down to this: consider how you might be wrong.
Which cannot be taken. Not by academics, at any rate, who must publish or perish, who must bring in grant dollars or die. There is little time for introspection and self-doubt.
This, too, varies by field. And you have to understand something about Yours Truly. As a pathologist of bad and putrescent science, I have become jaded. There is good science out there. It hasn’t entirely ceased.
Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.
"There is good science out there. It hasn’t entirely ceased."
No. Not entirely. But "it" (science in general) has fallen prey to the loathsome infection that is socialism. Some days we seem well on our way to re-implementing Lysenkoism. Think masks, or six-foot spacing, or gene therapies promoted as "vaccines". Look at how governments have assumed virtually total control over research funding. ALL funding.
Then think about all of the research that DOES NOT happen!
I have more questions. None I can put into words, as statistics has always confounded me. Perhaps I shall read Uncertainty for some clarity.