# Probability axioms

The probability $P$ of some event $E$ (denoted $P(E)$) is defined with respect to a "universe" or sample space $S$ of all possible elementary events in such a way that $P$ must satisfy the Kolmogorov axioms.

Alternatively, a probability can be interpreted as a measure on a sigma-algebra of subsets of the sample space, those subsets being the events, such that the measure of the whole set equals 1. This property is important, since it gives rise to the natural concept of conditional probability. Every set $A$ with non-zero probability defines another probability on the space:

$P(B \vert A) = {P(B \cap A) \over P(A)}.$
This is usually read as "probability of $B$ given $A$". $B$ and $A$ are said to be independent if the conditional probability of $B$ given $A$ is the same as the probability of $B$.
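As a small illustration, conditional probability can be computed directly from this definition; the fair six-sided die and the particular events below are hypothetical choices, not part of the axioms themselves:

```python
from fractions import Fraction

# Hypothetical example: a fair six-sided die under the uniform measure.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(E): fraction of equally likely outcomes that lie in E."""
    return Fraction(len(event & sample_space), len(sample_space))

def cond_prob(b, a):
    """P(B | A) = P(B ∩ A) / P(A); requires P(A) > 0."""
    return prob(b & a) / prob(a)

even = {2, 4, 6}          # event B: the roll is even
at_most_4 = {1, 2, 3, 4}  # event A: the roll is at most 4

print(cond_prob(even, at_most_4))  # P(B | A) = (2/6) / (4/6) = 1/2
```

Using exact `Fraction` arithmetic avoids floating-point error, so equalities between probabilities can be tested exactly.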

In the case that the sample space is finite or countably infinite, a probability function can also be defined by its values on the elementary events $\{e_1\}, \{e_2\}, \ldots$, where $S = \{e_1, e_2, \ldots\}$.
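For instance, on such a space a probability can be specified by assigning a weight to each elementary event and summing over the outcomes in an event; the three-outcome space and its weights below are made up for illustration:

```python
from fractions import Fraction

# Hypothetical three-outcome sample space with assigned elementary probabilities.
p_elem = {"e1": Fraction(1, 2), "e2": Fraction(1, 3), "e3": Fraction(1, 6)}
assert sum(p_elem.values()) == 1  # second axiom: P(S) = 1

def prob(event):
    """P(E) = sum of the probabilities of the elementary events in E."""
    return sum(p_elem[e] for e in event)

print(prob({"e1", "e3"}))  # 1/2 + 1/6 = 2/3
```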

### First axiom

For any set $E$:
$0 \leq P(E) \leq 1.$
That is, the probability of an event set is represented by a real number between 0 and 1.

### Second axiom

$P(S) = 1$.
That is, the probability that some elementary event in the entire sample set will occur is 1, or certainty. More specifically, there are no elementary events outside the sample set. Neglecting this is a common source of mistaken probability calculations: if the whole sample set cannot be precisely defined, then the probability of any subset cannot be defined either.

### Third axiom

Any countable sequence of pairwise disjoint events $E_1, E_2, \ldots$ satisfies
$P(E_1 \cup E_2 \cup \cdots) = \sum_i P(E_i)$.
That is, the probability of an event set which is the union of pairwise disjoint subsets is the sum of the probabilities of those subsets. This property is called σ-additivity (countable additivity). If the subsets overlap, the sum overcounts the shared outcomes and the equality need not hold.
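The additivity requirement, and its failure for overlapping events, can be checked on a finite example; the fair die here is chosen purely for illustration:

```python
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}  # hypothetical fair die, uniform measure

def prob(event):
    """P(E): fraction of equally likely outcomes in E."""
    return Fraction(len(event & outcomes), len(outcomes))

e1, e2 = {1, 2}, {3, 4}  # disjoint events: additivity holds exactly
assert prob(e1 | e2) == prob(e1) + prob(e2)

f1, f2 = {1, 2}, {2, 3}  # overlapping events: the sum overcounts {2}
assert prob(f1 | f2) < prob(f1) + prob(f2)
print(prob(f1 | f2), prob(f1) + prob(f2))  # 1/2 vs 2/3
```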

These axioms are known as the Kolmogorov axioms, after Andrey Kolmogorov, who introduced them in 1933.

From these axioms one can deduce other useful rules for calculating probabilities. For example:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

That is, the probability that A or B will happen is the sum of the probabilities that A will happen and that B will happen, minus the probability that A and B will happen.

$P(S - E) = 1 - P(E)$

That is, the probability that any event will not happen is 1 minus the probability that it will.
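Both derived rules can be verified numerically on a small finite space, again using a hypothetical fair die:

```python
from fractions import Fraction

space = {1, 2, 3, 4, 5, 6}  # hypothetical fair die, uniform measure

def prob(event):
    """P(E): fraction of equally likely outcomes in E."""
    return Fraction(len(event & space), len(space))

a, b = {1, 2, 3, 4}, {3, 4, 5}

# Inclusion-exclusion: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
assert prob(a | b) == prob(a) + prob(b) - prob(a & b)

# Complement rule: P(S - E) = 1 - P(E)
assert prob(space - a) == 1 - prob(a)
```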

Using conditional probability as defined above, it also follows immediately that:

$P(A \cap B) = P(A) \cdot P(B \vert A)$

That is, the probability that A and B will happen is the probability that A will happen, times the probability that B will happen given that A happened. It then follows that A and B are independent if and only if

$P(A \cap B) = P(A) \cdot P(B)$.
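The multiplication rule and the independence criterion can also be checked on a finite example; the events below are deliberately chosen so that they happen to be independent on a fair die (a hypothetical setup, not the only possibility):

```python
from fractions import Fraction

space = {1, 2, 3, 4, 5, 6}  # hypothetical fair die, uniform measure

def prob(event):
    """P(E): fraction of equally likely outcomes in E."""
    return Fraction(len(event & space), len(space))

def cond_prob(b, a):
    """P(B | A) = P(B ∩ A) / P(A); requires P(A) > 0."""
    return prob(b & a) / prob(a)

a = {1, 2}     # event A: the roll is 1 or 2, P(A) = 1/3
b = {2, 4, 6}  # event B: the roll is even, P(B) = 1/2

# Multiplication rule: P(A ∩ B) = P(A) · P(B | A)
assert prob(a & b) == prob(a) * cond_prob(b, a)

# Independence: P(A ∩ B) = 1/6 = (1/3)(1/2) = P(A) · P(B)
assert prob(a & b) == prob(a) * prob(b)
```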

All Wikipedia text is available under the terms of the GNU Free Documentation License
