I've often heard "0" probability described as "impossibility" and "1" as "certainty." It is of no explanatory advantage at all to say that "number 0" means "probability 0." The reader already knew that. :-) --LMS
That's the whole problem. What the reader "knows", versus what the reader thinks he or she knows, is not the same here. (I just read your peer-review comments, so you can't slither away on a "common sense" argument.) The real problem is that probability is built on measure theory.
In a nutshell, the probability of any single outcome, including the outcomes 0 or 1, is 0: a single point forms a set of measure zero. For a concrete example, integrating a density f from a to a gives 0, and that integral is exactly how a continuous random variable assigns probabilities. So the "certain" and "impossible" events have no measure, and we cannot speak of them within probability theory. If this seems strange, it's related to the notion of denumerability (countability): denumerable sets have measure zero. The easiest example of a denumerable set is the set of all rational numbers.
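In symbols, that remark comes out as follows (a sketch, assuming X is a continuous random variable with density f; the rationals illustrate the countable case):

    P(X = a) = \int_a^a f(x)\,dx = 0,
    \qquad
    P(X \in \mathbb{Q}) = \sum_{q \in \mathbb{Q}} P(X = q) = 0.

The second equation is the countability point: the rationals are a countable union of measure-zero singletons, so the whole set still has measure zero.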
Ok, I just reread the main text. I concede that I did not make the point clear, and it could, should, and probably will be redone by someone smarter than me by the time I peruse the relevant texts and ponder it a bit. If not, I will be back after I think about it some more.
Of course, but I'm not writing for mathematicians. To ordinary human beings, converting from one notation to another is indeed a "computation", although typing "1÷2=" into a calculator is a pretty simple one. I think it's important to show laymen the different ways they might see probabilities expressed and how they relate. That information is more knowledge about language than about math, but again I see no reason why a page about "probability" should be limited to strict mathematics. Perhaps there is a better way to word the above to be more rigorous and also useful to a lay audience, but I can't think of it offhand. --LDC
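A minimal sketch of the notation conversions in question (illustrative Python only, nothing from the article itself; Fraction is the standard-library fractions type):

    from fractions import Fraction

    p = Fraction(1, 2)                          # probability as a fraction
    print(float(p))                             # decimal notation: 0.5
    print(f"{float(p):.0%}")                    # percentage notation: 50%
    print(f"{p.numerator} in {p.denominator}")  # "1 in 2" phrasing

The point being that the whole "computation" between notations is one division, which is why it belongs under language rather than mathematics.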
So the main page was changed back. Fine. wiki wiki wiki
The most I will concede is Bremaud's definition (P. Bremaud, An Introduction to Probabilistic Modeling, p. 4). The event E such that P(E) = 1 is the "almost certain" event; the event E' such that P(E') = 0 is the "almost impossible" event. I stress the technical aspect of these definitions: "almost" refers to the fact that these events both occur (as does any other event) with probability 0, that is, with no measure. This stuff is really sticky, no doubt.
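In symbols, just transcribing the wording quoted above:

    P(E) = 1 \;\Rightarrow\; E \text{ is almost certain},
    \qquad
    P(E') = 0 \;\Rightarrow\; E' \text{ is almost impossible}.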
I agree with LDC that the page should be accessible to lay audiences, but let's not insult their intelligence. Just because the distinctions are subtle (subtile heh) is no reason to hide them, and these fine points really are part of the story of how and why probability works, in both the pure and applied sense.
Further complicating everything is the distinction between the continuous and discrete aspects of probability. I notice that Probability is (lately) classified as discrete math, but that's not entirely correct, and is a topic for a different day. :)
Oh yeah... common sense would say that P(E) = 0 means impossible, etc., but that doesn't lead to useful definitions for a theory of probability (if it did, the introductory texts would teach it that way instead of studiously avoiding the topic). I am really hoping someone currently teaching a grad class in this will set us all straight.
I just want to point out that if you dogmatically state what probability is and how probability claims are to be interpreted, as though this were "known by scientists," you fail to do justice to the fact that there are academics, from a wide variety of fields, who dispute the very questions on which you are dogmatic. Moreover, an encyclopedia article called "probability" should do justice to all sides of this disputation. Anyway, I totally agree with your last sentence! --LMS
The beauty of it all is that the dogma is "mathematical truth". One starts with definitions, then develops a theory standing on those definitions. I hold very tightly to this dogma. There may be academics from a wide variety of fields who dispute the origin, foundations, and claims of probability theory, but I rather suspect none of these academicians are practicing, contemporary mathematicians (unless they be constructivists... and even they would likely uphold the discrete aspect. But let's not go there :).
Lest this be construed as anything but a neutral point of view, open any text on the foundations of probability and measure theory. That's where I learned it.
My entire point really is that the current theory of probability confounds common sense in some respects. The places where this happens ("impossibility" etc.) are a result of the construction of the theory. We should embrace this, not gloss over it.
Perhaps I should stop defending and ask some of my own questions, to wit: what does "impossible" mean, and how can we say with any "certainty" (whatever that means) what is or is not impossible? "Almost everywhere", "almost impossible" and "almost certain" have precise mathematical definitions in probability. Attributing any other meaning is philosophy, not mathematics.
The question of what "certainty" means is a philosophical one, indeed it is the whole subject of epistemology, and is irrelevant here. The question of how to apply the mathematics of probability to real-world situations is also a philosophical one, and is the same as the question of how to apply scientific findings to real life. That, too, is a subject entirely irrelevant here. Putting links to articles about those philosophical questions is appropriate here, so feel free to do so. But this article is about probability itself as a subject. The mathematics of probability--which, like all mathematics, exists entirely independently of any interpretation or application thereof--assigns the number "1" to mean "certainty", by definition, and "0" to mean "impossibility", by definition, without taking any philosophical position on what those terms mean. The ordinary application of those terms to ordinary circumstances of life (like rolling dice) is entirely obvious, useful, and clear. The fact that some philosophers argue about it is a subject for some other article; our readers are well served in their ordinary lives by the simple understanding that the probability of drawing the 17 of hearts from a deck of cards is 0, and the probability of drawing a card with two sides is 1. If you want to enlighten them with some deeper understanding, write about it and put a link here. Until then, let's keep the text here practical and useful. --LDC
Lee, you are speaking for all of our readers? Well, then, by all means I bow to your authority. My participation in this conversation is necessarily over.
That depends on whether your definition of "=" implies that 0 = 1/∞ (or more exactly, 1/ℵ1 since we're talking reals). When dealing with infinitesimals, it may not be appropriate to define it that way. And it is just a matter of definition, as is all of mathematics. --LDC
Maybe this example is a tiny bit more down to Earth: say you flip a fair coin repeatedly, keep doing it forever. How likely is it that you get Tail all the way through, forever? Pretty unlikely. In fact, the probability can be shown to be 0. But truly impossible it is not. It's just not going to happen. And if you don't like zeros all the way through: how likely is it that you flip the binary expansion of π (say Tail = 0, Head = 1)? Again, the probability is 0. In fact, every sequence you produce this way has probability zero, even though one of them will happen.
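The arithmetic behind the all-Tails claim is just a limit over finite prefixes, and the same bound applies to any single fixed sequence, including the binary digits of π:

    P(\text{Tails on each of the first } n \text{ flips})
    = \left(\tfrac{1}{2}\right)^n
    \;\xrightarrow{\;n \to \infty\;}\; 0.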
However, these effects only show up if you do "artificial things" such as flipping a coin forever or picking a random real number. For everyday probabilities, which are always discrete probabilities, the notions of zero probability and impossibility are indeed identical. AxelBoldt
I'd like the article to emphasize that mathematical probability has absolutely nothing to do with the intuitive notions of probability people use in everyday life. In everyday life, a set of events each has a probability of happening, and one of those events happens.
In mathematics, the notion of only one of those events occurring is sheer nonsense. (Mathematics is incapable of distinguishing between "can" happen and "does" happen, so if only one event "does" happen, then that's because it has probability 1.)
In everyday life, it takes many iterations of an experiment, each with its single outcome, to reconstruct an estimate of the probability. In mathematics, all of the outcomes happen and the probability distribution is exact.
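A throwaway sketch of that "many iterations" idea (illustrative Python; the trial count is made up for the example):

    import random

    # Reconstruct an estimate of P(heads) for a fair coin the
    # "everyday" way: repeat the experiment, count outcomes.
    trials = 100_000
    heads = sum(random.random() < 0.5 for _ in range(trials))
    print(heads / trials)  # hovers near 0.5, but is essentially never exact

The simulated estimate wobbles around the true value, which is exactly the gap between the empirical and the mathematical notions being described.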
This is not a trivial issue but has enormous impact on people's (even many physicists') incomprehension of probability as it is used in physics. In particular, the notion of "probability" used by the Copenhagen interpretation of QM is literal nonsense. It's an intuition about the everyday world which people have transported into physics without justification; an intuition with no mathematical basis. The only kind of interpretation of QM compatible with mathematical probability is the one given by Many-Worlds. Again, this is not a trivial issue.
By the way, is it only my impression or have people stopped teaching the mathematical definition of probability in stats courses? The only textbook I found which contained a formal definition of probability (as opposed to relying on people's intuition) was pretty old. I base my understanding of probability on the formal definition, of course.
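For reference, the formal definition being alluded to is presumably the measure-theoretic one: a probability space is a triple (Ω, F, P), where F is a σ-algebra of subsets of Ω and P is a measure on F satisfying

    P(E) \ge 0 \ \text{for all } E \in \mathcal{F},
    \qquad
    P(\Omega) = 1,
    \qquad
    P\Big(\bigcup_{i=1}^{\infty} E_i\Big) = \sum_{i=1}^{\infty} P(E_i)
    \ \text{for pairwise disjoint } E_i \in \mathcal{F}.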