In statistics, an estimator is a function of the known data that is used to estimate an unknown parameter. Many different estimators are possible for any given parameter, so some criterion is used to choose between them; often, however, a criterion cannot clearly pick one estimator over another.
There are two types of estimators: point estimators and interval estimators.
Point estimators
For a point estimator θ̂ of parameter θ:
- The bias of θ̂ is defined as B(θ̂) = E[θ̂] − θ
- θ̂ is an unbiased estimator of θ if and only if B(θ̂) = 0 for all θ
- The mean square error of θ̂ is defined as MSE(θ̂) = E[(θ̂ − θ)²]
- MSE(θ̂) = V(θ̂) + (B(θ̂))²
- The standard deviation of θ̂ is also called the standard error of θ̂.
where V(X) is the variance of X and E is the expected value operator.
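The bias–variance decomposition of the MSE above can be checked numerically. The sketch below (an illustration, not part of the original text; the choice of the maximum-likelihood variance estimator, sample size, and trial count are all arbitrary assumptions) estimates B(θ̂), V(θ̂), and MSE(θ̂) by simulation and verifies that MSE(θ̂) = V(θ̂) + (B(θ̂))²:

```python
import numpy as np

# Monte Carlo check of MSE(θ̂) = V(θ̂) + (B(θ̂))², using as θ̂ the biased
# maximum-likelihood variance estimator θ̂ = (1/n) Σ (x_i − x̄)², and as θ
# the true variance σ². (Illustrative sketch; parameters chosen arbitrarily.)
rng = np.random.default_rng(0)
n, sigma2, trials = 10, 4.0, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
theta_hat = samples.var(axis=1)           # ddof=0: the biased MLE of the variance

bias = theta_hat.mean() - sigma2          # B(θ̂) = E[θ̂] − θ; exactly −σ²/n here
variance = theta_hat.var()                # V(θ̂)
mse = ((theta_hat - sigma2) ** 2).mean()  # MSE(θ̂) = E[(θ̂ − θ)²]

print(bias, variance, mse)
# The decomposition holds up to floating-point error:
assert abs(mse - (variance + bias ** 2)) < 1e-8
```

For this estimator the bias is known in closed form, B(θ̂) = −σ²/n, so the simulated bias should come out near −0.4.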
A common criterion is to choose the unbiased estimator with the lowest variance. Sometimes it is preferable not to limit oneself to unbiased estimators; see Bias (statistics). Concerning such "best unbiased estimators", see also the Gauss–Markov theorem.
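The minimum-variance criterion can be made concrete by comparing two unbiased estimators of the same parameter. In the sketch below (an illustration not drawn from the text; the distribution, sample size, and trial count are assumptions), both the sample mean and the sample median are unbiased for the mean of a symmetric distribution, but for normal data the sample mean has the lower variance, so this criterion prefers it:

```python
import numpy as np

# Two unbiased estimators of a normal mean: the sample mean and the sample
# median (unbiased by symmetry). For normal data the sample mean's variance
# is ≈ σ²/n while the median's is ≈ πσ²/(2n), so the mean wins on variance.
rng = np.random.default_rng(1)
mu, n, trials = 5.0, 25, 100_000

samples = rng.normal(mu, 1.0, size=(trials, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(means.var(), medians.var())
assert means.var() < medians.var()  # the minimum-variance criterion picks the mean
```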
See also Maximum likelihood.