We can think of a random variable as the numeric result of operating a non-deterministic mechanism or performing a non-deterministic experiment to generate a random result. For example, rolling a die and recording the outcome yields a random variable with range {1,2,3,4,5,6}. Picking a random person and measuring their height yields another random variable.
Mathematically, a random variable is defined as a measurable function from a probability space to some measurable space. This measurable space is the space of possible values of the variable; it is usually taken to be the real numbers equipped with the Borel σ-algebra, and we will assume this throughout this encyclopedia unless otherwise specified.
If a random variable X:Ω->R defined on the probability space (Ω, P) is given, we can ask questions like "How likely is it that the value of X is bigger than 2?". This is the probability of the event {s in Ω : X(s) > 2}, which is often written as P(X > 2) for short.
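As an illustration only (not part of the formal development), the following Python sketch models the fair die as a finite probability space and computes P(X > 2) by summing the measure of the event {s in Ω : X(s) > 2}.

```python
# A minimal sketch: the fair-die probability space, with X taken to be
# the identity function on the outcomes.
omega = [1, 2, 3, 4, 5, 6]           # sample space Ω
p = {s: 1/6 for s in omega}          # probability measure P
X = lambda s: s                      # the random variable X: Ω -> R

# P(X > 2) is the probability of the event {s in Ω : X(s) > 2}.
event = [s for s in omega if X(s) > 2]
prob = sum(p[s] for s in event)
print(prob)                          # 0.666..., i.e. 4/6
```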
Recording all these probabilities of output ranges of a real-valued random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various values of X. Such a probability distribution can always be captured by its cumulative distribution function F_X(x) = P(X ≤ x).
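To illustrate the "forgetting" (a sketch continuing the die example above, not a formal argument): X(s) = s and Z(s) = 7 − s are different functions on Ω, yet they induce the same distribution, and the cumulative distribution function records exactly this shared information.

```python
# A minimal sketch: two different random variables on the same die space
# that nevertheless have the same cumulative distribution function.
omega = [1, 2, 3, 4, 5, 6]
p = {s: 1/6 for s in omega}

X = lambda s: s          # X(s) = s
Z = lambda s: 7 - s      # Z(s) = 7 - s, a different function on Ω

def cdf(V, x):
    """F_V(x) = P(V <= x), computed by summing over the sample space."""
    return sum(p[s] for s in omega if V(s) <= x)

for x in [0, 1, 3.5, 6]:
    print(x, cdf(X, x), cdf(Z, x))   # identical columns: same distribution
```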
If we have a random variable X on Ω and a measurable function f:R->R, then Y=f(X) will also be a random variable on Ω, since the composition of measurable functions is measurable. The same procedure that allowed one to go from a probability space (Ω, P) to (R, dF_X) can be used to obtain the probability distribution of Y. The cumulative distribution function of Y is F_Y(y) = P(f(X) ≤ y).
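In the discrete case this procedure can be carried out directly, as in the following sketch; the choice f(x) = min(x, 4) is an arbitrary illustrative example, not anything prescribed by the theory.

```python
# A minimal sketch: the CDF of Y = f(X) obtained directly from
# F_Y(y) = P(f(X) <= y), i.e. by pulling the event back to Ω.
omega = [1, 2, 3, 4, 5, 6]
p = {s: 1/6 for s in omega}
X = lambda s: s
f = lambda x: min(x, 4)              # an illustrative measurable f: R -> R
Y = lambda s: f(X(s))                # Y = f(X) is again a random variable on Ω

def cdf_Y(y):
    return sum(p[s] for s in omega if Y(s) <= y)

print([cdf_Y(y) for y in [0, 1, 2, 3, 4, 5]])
# ≈ [0, 1/6, 2/6, 3/6, 1, 1]: the whole upper tail piles up at y = 4
```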
Let X be a real-valued random variable and let Y = X². Then:
If y < 0, then Prob(X² ≤ y) = 0, so F_Y(y) = 0.
If y ≥ 0, then Prob(X² ≤ y) = Prob(|X| ≤ √y) = Prob(-√y ≤ X ≤ √y), so F_Y(y) = F_X(√y) - F_X(-√y) (when X has a continuous distribution, so that the endpoint -√y carries no probability).
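A quick numerical check of this formula (a sketch only; the choice of a standard normal X is an assumption made for the illustration) compares a Monte Carlo estimate of Prob(X² ≤ y) with F_X(√y) − F_X(−√y).

```python
# A minimal sketch: verify F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y)) for Y = X²,
# assuming X is standard normal.
import math
import random

def F_X(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]

for y in [0.5, 1.0, 2.0]:
    empirical = sum(x * x <= y for x in samples) / len(samples)
    exact = F_X(math.sqrt(y)) - F_X(-math.sqrt(y))
    print(y, round(empirical, 3), round(exact, 3))   # the two columns agree
```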
The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know its "average value". This is captured by the mathematical concept of the expected value of a random variable, denoted E[X]. Note that, in general, E[f(X)] is not the same as f(E[X]). Once the "average value" is known, one can ask how far from this average value the values of X typically are, a question that is answered by the variance and standard deviation of the random variable.
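For the fair die these quantities can be computed exactly; the following sketch (illustrative only) also checks that E[f(X)] differs from f(E[X]) for f(x) = x².

```python
# A minimal sketch: expected value, variance and standard deviation of a
# fair die roll, and a check that E[X²] is not (E[X])².
from fractions import Fraction

values = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

E_X  = sum(p * x for x in values)            # E[X]   = 7/2
E_X2 = sum(p * x * x for x in values)        # E[X²]  = 91/6
var  = E_X2 - E_X ** 2                       # Var[X] = 35/12
std  = float(var) ** 0.5                     # ≈ 1.708

print(E_X, E_X2, E_X ** 2)   # 7/2, 91/6, 49/4  -> E[X²] ≠ (E[X])²
print(var, std)
```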
Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables X, find a collection {f_i} of functions such that the expectation values E[f_i(X)] fully characterise the distribution of the random variable X.
Much of mathematical statistics consists in proving convergence results for certain sequences of random variables; see for instance the law of large numbers and the central limit theorem.
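As an informal illustration of the kind of convergence these theorems describe (a sketch, not a proof), the running mean of independent die rolls settles towards E[X] = 3.5 as the sample size grows, in accordance with the law of large numbers.

```python
# A minimal sketch of the law of large numbers for fair die rolls.
import random

random.seed(1)
for n in [10, 100, 10_000, 1_000_000]:
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, sum(rolls) / n)   # successive sample means drift towards 3.5
```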
There are various senses in which a sequence (Xn) of random variables can converge to a random variable X. These are explained in the article on convergence of random variables.