JJ Couey, who hosts a podcast well known to some of you and is a friend of the Broken Science Initiative, asked me about the so-called Law of Large Numbers.
I'm the navigator for my family when we're out (I'm best at using Google Maps), and believe me, it would be a lot easier if the map were, in fact, the terrain.
In market research, for example, sample selection and ensuring that the samples are representative are paramount. Missing a trend simply because your sample didn't encompass enough types of people is a waste of funds.
In my former life as an environmental test engineer working with inertial guidance instruments (accelerometers/gyroscopes), the main focus of my job was to test for inertial error terms caused by mechanical flexing within the accelerometer housing. These were electromechanical instruments, not solid state.
The "modelers" decided that they could do away with a very expensive vendor test by "modeling" out this one particular error term by adding more mechanical rigidity to a certain structure within the instruments housing. In theory it seemed possible. In reality the modeled change made very little difference in the error terms. In the end, a whole lot of money was spent trying to "model" out an error term in order to save money on environmental testing of each instrument at the vendor facility. The design folks spent a lot of time trying to poke holes in my real world data. They were not successful. That particular error term is still tested for and corrected for in the Trident missile guidance system.
You mean individual events have individual causes, and there isn't some fluffy blob of generic cause slime that rains down on us and makes things happen in a probabilistic fashion? Helpful as always, Dr. Briggs. Statistics without a hypothesized cause, and a cause without a proposed mechanism, are utter garbage. If you haven't walked over and looked at the machine, you don't know anything about it. When did we forget that there are three kinds of lies that get progressively worse? Lies, damned lies, and statistics. Oh sorry, truth is anecdotal.
Regarding whether things ever "have" a probability -- what about the propensity theory inherent in the stable (robust) probabilities found with radioactive decay?
For example, the half-life of Iodine-123 is 13.13 hours, while that of Iodine-125 is 1,443.36 hours (60.14 days). If you began with a trillion (10^12) atoms of each and looked at them again 14 hours later, can't you say there'd be a higher chance of finding fewer than half a trillion atoms of Iodine-123 than of Iodine-125?
And isn't that higher chance due to a "property" of the type of atom?
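As a quick check of the arithmetic (a minimal sketch; it uses the half-lives quoted above and the usual decay law N(t) = N0 * 2^(-t/T_half), ignoring the tiny statistical spread around the expected counts):

```python
# Minimal sketch of the decay arithmetic in the comment above.
# Half-lives are the figures quoted there; 10**12 starting atoms of each isotope.
N0 = 10**12                                       # starting atoms
t = 14.0                                          # hours elapsed
half_lives = {"I-123": 13.13, "I-125": 1443.36}   # half-lives in hours

for isotope, t_half in half_lives.items():
    expected = N0 * 2 ** (-t / t_half)            # expected surviving atoms
    print(f"{isotope}: ~{expected:.3e} atoms expected to remain")

# I-123: ~4.78e11  (below half a trillion)
# I-125: ~9.93e11  (almost all still there)
```

With a trillion atoms, the fluctuation around those expected counts is on the order of one part in a million, so the count for I-123 is all but certain to fall below half a trillion, while for I-125 it essentially never will.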
When I was playing poker regularly, I often questioned good players about this problem. We have a finite time, and hands are dealt slowly. So in theory we can work out the probabilities, but being a luckier player could still result in a much better outcome over our finite number of hands.
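For a rough sense of scale, here's a minimal simulation sketch of that point (the per-hand edge and swing figures are made up, purely for illustration): over a finite session, the spread of the total grows with the square root of the number of hands and can easily swamp a small edge.

```python
# Minimal sketch with made-up numbers: a small per-hand edge vs. per-hand luck.
import random

random.seed(1)

hands = 1000        # a finite session, hands dealt slowly
edge = 0.05         # skilled player's average profit per hand (arbitrary units)
swing = 10.0        # per-hand standard deviation ("luck")

def session(mean_per_hand):
    """Total result of one session of `hands` hands."""
    return sum(random.gauss(mean_per_hand, swing) for _ in range(hands))

print(f"skilled player:    {session(edge):+.1f}")
print(f"break-even player: {session(0.0):+.1f}")

# Expected totals: +50 vs 0, but the session standard deviation is
# swing * sqrt(hands), roughly 316, so the "luckier" player often ends ahead.
```

Run it a few times: with these numbers the break-even player finishes ahead of the skilled one a large fraction of the time, which is exactly the "finite number of hands" worry.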
Yep.