Linear regression

Linear regression is a method of data analysis intended to be used with a set of paired observations on two variables on the same set of statistical units. Conventionally, we refer to one of the variables as independent (usually labeled <math>x</math>) and the other as dependent (labeled <math>y</math>).

The notion of an independent variable often (but not always) implies the ability to choose the levels of the independent variable, with the dependent variable then responding naturally, as in the stimulus-response model. The independent variable <math>x</math> may be a scalar or a vector. In the former case we may write one of the simplest linear-regression models as follows:

<math>y_i = \alpha + \beta x_i + \epsilon_i</math>
where <math>\epsilon_i</math> is a random "error".

Historically, in applications to measurements in astronomy, the "error" was actually a random measurement error, but in many applications, <math>\epsilon</math> is merely the amount by which the individual <math>y</math>-value differs from the average <math>y</math>-value among individuals having the same <math>x</math>-value. The average value of the random "error" <math>\epsilon</math> is zero. Often in linear regression problems statisticians rely on the Gauss-Markov assumptions:

  1. The random errors <math>\epsilon_i</math> have expected value 0.
  2. The random errors <math>\epsilon_i</math> are uncorrelated (this is weaker than an assumption of probabilistic independence).
  3. The random errors <math>\epsilon_i</math> are "homoscedastic", i.e., they all have the same variance.
(See also Gauss-Markov theorem. That result says that under the assumptions above, least-squares estimators are in a certain sense optimal.)

Sometimes stronger assumptions are relied on:

  1. The random errors <math>\epsilon_i</math> have expected value 0.
  2. They are independent.
  3. They are normally distributed.
  4. They all have the same variance.
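
For illustration, here is a minimal sketch (in Python with NumPy; the parameter values, sample size, and error variance are arbitrary choices for illustration, not part of the model) of generating data whose errors satisfy these stronger assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 2.0, 0.5                 # assumed illustrative parameter values
n = 50
x = rng.uniform(0.0, 10.0, size=n)     # independent variable

# Independent normal errors: mean 0, mutually independent,
# normally distributed, and all with the same variance.
eps = rng.normal(0.0, 1.0, size=n)
y = alpha + beta * x + eps             # y_i = alpha + beta * x_i + eps_i
```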

If <math>x_i</math> is a vector, we take the product <math>\beta x_i</math> to be a dot product, with <math>\beta</math> then also a vector.

It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of <math>y = \alpha + \beta x</math> is a line. But in fact, if the model is

<math>y_i = \alpha + \beta x_i + \gamma x_i^2 + \epsilon_i</math>

(in which case we have put the vector <math>(x_i, x_i^2)</math> in the role formerly played by <math>x_i</math> and the vector <math>(\beta, \gamma)</math> in the role formerly played by <math>\beta</math>), then the problem is still one of linear regression, even though the graph is not a straight line. The rationale for this terminology will be explained below.
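
To make this point concrete, the following sketch (Python with NumPy; the parameter values are assumed for illustration) fits the quadratic model by ordinary linear least squares, simply by treating <math>(1, x_i, x_i^2)</math> as the row of regressors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters for the quadratic model
alpha, beta, gamma = 1.0, -0.3, 0.05
x = rng.uniform(0.0, 10.0, size=60)
y = alpha + beta * x + gamma * x**2 + rng.normal(0.0, 0.5, size=60)

# The model is still linear in the unknowns (alpha, beta, gamma):
# the regressor for observation i is the vector (1, x_i, x_i^2).
X = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
print(coeffs)                                    # approximately [1.0, -0.3, 0.05]
```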

A statistician will usually estimate the unobservable values of the parameters <math>\alpha</math> and <math>\beta</math> by the method of least squares, which consists of finding the values of <math>a</math> and <math>b</math> that minimize the sum of squares of the residuals

<math>e_i = y_i - (a + bx_i).</math>
Those values are the "least-squares estimates." The residuals may be regarded as estimates of the errors.
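
A minimal sketch of this computation (Python with NumPy, on assumed illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=50)   # assumed illustrative data

b, a = np.polyfit(x, y, 1)        # least-squares estimates: slope b, intercept a
e = y - (a + b * x)               # residuals
print(a, b, (e**2).sum())         # estimates and the minimized sum of squares
```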

Notice that, whereas the errors are independent, the residuals cannot be independent because the use of least-squares estimates implies that the sum of the residuals must be 0, and the dot-product of the vector of residuals with the vector of <math>x</math>-values must be 0, i.e., we must have

<math>e_1 + \cdots + e_n = 0</math>
and
<math>e_1 x_1 + \cdots + e_n x_n = 0.</math>
These two linear constraints imply that the vector of residuals must lie within a certain <math>(n - 2)</math>-dimensional subspace of <math>R^n</math>; hence we say that there are "<math>n - 2</math> degrees of freedom for error". If one assumes the errors are normally distributed and independent, then it can be shown to follow that 1) the sum of squares of residuals
<math>e_1^2 + \cdots + e_n^2</math>
is distributed as
<math>\sigma^2 \chi^2_{n - 2},</math>
i.e., the sum of squares, divided by the error-variance <math>\sigma^2</math>, has a chi-square distribution with <math>n - 2</math> degrees of freedom, and 2) the sum of squares of residuals is actually probabilistically independent of the estimates <math>a</math> and <math>b</math> of the parameters <math>\alpha</math> and <math>\beta</math>.

These facts make it possible to use Student's t-distribution with <math>n - 2</math> degrees of freedom (so named in honor of the pseudonymous "Student") to find confidence intervals for <math>\alpha</math> and <math>\beta</math>.
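
Here is a sketch of such a confidence-interval calculation on assumed illustrative data, using the usual standard-error formulas for the scalar-<math>x</math> model (these formulas are standard but are not derived in this article) and an assumed 95% confidence level; scipy.stats.t supplies the quantile of Student's t-distribution with <math>n - 2</math> degrees of freedom:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(7)
n = 30
x = rng.uniform(0.0, 10.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)     # assumed illustrative data

b, a = np.polyfit(x, y, 1)
e = y - (a + b * x)
s2 = (e**2).sum() / (n - 2)                          # error-variance estimate
Sxx = ((x - x.mean())**2).sum()

se_b = np.sqrt(s2 / Sxx)                             # standard error of b
se_a = np.sqrt(s2 * (1.0 / n + x.mean()**2 / Sxx))   # standard error of a

tcrit = t.ppf(0.975, df=n - 2)                       # Student's t, n - 2 d.f.
print("alpha:", (a - tcrit * se_a, a + tcrit * se_a))
print("beta: ", (b - tcrit * se_b, b + tcrit * se_b))
```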

Denote by <math>Y</math> the column vector whose <math>i</math>th entry is <math>y_i</math>, and by <math>X</math> the <math>n \times 2</math> matrix whose first column contains <math>n</math> 1s and whose second column contains <math>x_i</math> as its <math>i</math>th entry. Let <math>\epsilon</math> be the column vector containing the errors <math>\epsilon_i</math>. Let <math>\delta</math> and <math>d</math> be respectively the <math>2 \times 1</math> column vector containing <math>\alpha</math> and <math>\beta</math> and the <math>2 \times 1</math> column vector containing the estimates <math>a</math> and <math>b</math>. Then the model can be written as

<math>Y = X \delta + \epsilon</math>

where <math>\epsilon</math> is normally distributed with expected value 0 (i.e., a column vector of 0s) and variance <math>\sigma^2 I_n</math>, where <math>I_n</math> is the <math>n \times n</math> identity matrix. The vector <math>X d</math> (recall that <math>d</math> is the vector of estimates) is then the orthogonal projection of <math>Y</math> onto the column space of <math>X</math>.

Then it can be shown that

<math>d = (X' X)^{-1} X' Y</math>

(where X' is the transpose of X) and the sum of squares of residuals is

<math>Y' (I_n - X (X' X)^{-1} X') Y.</math>

The fact that the matrix <math>X (X' X)^{-1} X'</math> is a symmetric idempotent matrix is relied on repeatedly, both in computations and in proofs of theorems. The linearity of <math>d</math> as a function of the vector <math>Y</math>, expressed above by writing <math>d = (X' X)^{-1} X' Y</math>, is the reason this technique is called "linear" regression. Nonlinear regression uses nonlinear methods of estimation.
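
The following sketch (assumed illustrative data) checks these matrix formulas numerically: the estimate <math>d = (X' X)^{-1} X' Y</math>, the quadratic form for the sum of squares of residuals, and the symmetry and idempotence of <math>X (X' X)^{-1} X'</math>:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
x = rng.uniform(0.0, 5.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)   # assumed illustrative data

X = np.column_stack([np.ones(n), x])               # n x 2 design matrix
d = np.linalg.solve(X.T @ X, X.T @ y)              # d = (X'X)^{-1} X'Y
H = X @ np.linalg.inv(X.T @ X) @ X.T               # X (X'X)^{-1} X'
sse = y @ (np.eye(n) - H) @ y                      # Y'(I_n - X(X'X)^{-1}X')Y

e = y - X @ d                                      # residuals
print(np.allclose(sse, e @ e))                     # True: both give the sum of squares
print(np.allclose(H, H.T), np.allclose(H @ H, H))  # symmetric and idempotent
```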

The matrix <math>I_n - X (X' X)^{-1} X'</math> that appears above is a symmetric idempotent matrix of rank <math>n - 2</math>. Here is an example of the use of that fact in the theory of linear regression. The finite-dimensional spectral theorem of linear algebra says that any real symmetric matrix <math>M</math> can be diagonalized by an orthogonal matrix <math>G</math>, i.e., the matrix <math>G' M G</math> is a diagonal matrix. If the matrix <math>M</math> is also idempotent, then the diagonal entries of <math>G' M G</math> must be idempotent numbers. Only two real numbers are idempotent: 0 and 1. So <math>I_n - X (X' X)^{-1} X'</math>, after diagonalization, has <math>n - 2</math> 1s and two 0s on the diagonal (for an idempotent matrix the number of unit eigenvalues equals the rank, here <math>n - 2</math>). That is most of the work in showing that the sum of squares of residuals has a chi-square distribution with <math>n - 2</math> degrees of freedom.
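
A quick numerical check of this spectral fact, on an assumed illustrative design matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
X = np.column_stack([np.ones(n), rng.uniform(0.0, 5.0, size=n)])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # I_n - X(X'X)^{-1}X'

eigvals = np.linalg.eigvalsh(M)                    # M is symmetric
print(np.round(eigvals, 10))   # n - 2 eigenvalues equal to 1 and two equal to 0
```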

Note: A useful alternative to linear regression is robust regression, in which mean absolute error is minimized instead of mean squared error as in linear regression. Robust regression is computationally much more intensive than linear regression and is somewhat more difficult to implement as well.
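
One simple way to carry out such a fit is to minimize the mean absolute error with a general-purpose optimizer; the sketch below (using scipy.optimize.minimize with the derivative-free Nelder-Mead method, on assumed illustrative data containing one deliberate outlier) is only one of many possible implementations:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=50)   # assumed illustrative data
y[0] += 20.0                                        # one gross outlier

def mean_abs_error(params):
    a, b = params
    return np.mean(np.abs(y - (a + b * x)))

# Nelder-Mead avoids needing derivatives of the non-smooth objective.
result = minimize(mean_abs_error, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)   # estimates far less affected by the outlier than least squares
```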


Summarizing the data

We sum the observations, the squares of the <math>X</math>'s and <math>Y</math>'s, and the products <math>X_i Y_i</math> to obtain the following quantities.

<math>S_X = X_1 + X_2 + \cdots + X_n</math>
and <math>S_Y</math> similarly.
<math>S_{XX} = X_1^2 + X_2^2 + \cdots + X_n^2</math>
and <math>S_{YY}</math> similarly.
<math>S_{XY} = X_1 Y_1 + X_2 Y_2 + \cdots + X_n Y_n</math>
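
A sketch of these sums on assumed illustrative data:

```python
import numpy as np

# X and Y are the paired observations; these values are illustrative only.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])
n = len(X)

S_X  = X.sum()
S_Y  = Y.sum()
S_XX = (X**2).sum()
S_YY = (Y**2).sum()
S_XY = (X * Y).sum()
```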

Estimating beta

We use the summary statistics above to calculate b, the estimate of beta.

<math>b = {n S_{XY} - S_X S_Y \over n S_{XX} - S_X S_X}</math>

Estimating alpha

We use the estimate of beta and the other statistics to estimate alpha by:

<math>a = {S_Y - b S_X \over n}</math>
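
Both of the preceding formulas can be checked with a short self-contained sketch (assumed illustrative data):

```python
import numpy as np

# Illustrative paired data (same role as X and Y above)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])
n = len(X)

S_X, S_Y = X.sum(), Y.sum()
S_XX, S_XY = (X**2).sum(), (X * Y).sum()

b = (n * S_XY - S_X * S_Y) / (n * S_XX - S_X * S_X)   # estimate of beta
a = (S_Y - b * S_X) / n                               # estimate of alpha
print(a, b)
```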

Displaying the residuals

The first method of displaying the residuals uses a histogram or cumulative distribution to depict the similarity (or lack thereof) to a normal distribution. Non-normality suggests that the model may not be a good summary description of the data.

The second method is to plot the residuals, <math>Y - a - bX</math>, against the independent variable <math>X</math>; a sketch of such a plot follows the list below. There should be no discernible trend or pattern if the model is satisfactory for these data. Some of the possible problems are:

  • Residuals increase (or decrease) as the independent variable increases -- indicates mistakes in the calculations -- find the mistakes and correct them.
  • Residuals first rise and then fall (or first fall and then rise) -- indicates that the appropriate model is (at least) quadratic. See polynomial regression.
  • One residual is much larger than the others and opposite in sign -- suggests that there is one unusual observation which is distorting the fit --
    • Verify its value before publishing or
    • Eliminate it, document your decision to do so, and recalculate the statistics.
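
Here is a sketch of the residual plot described above (Matplotlib, on assumed illustrative data):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
X = rng.uniform(0.0, 10.0, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0.0, 1.0, size=50)   # assumed illustrative data

b, a = np.polyfit(X, Y, 1)                          # slope, intercept
residuals = Y - a - b * X

plt.scatter(X, residuals)
plt.axhline(0.0, color="grey")
plt.xlabel("X")
plt.ylabel("residual")
plt.show()   # look for trends, curvature, or isolated extreme points
```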

Ancillary statistics

The sum of squared deviations can be partitioned as in ANOVA to indicate what part of the dispersion of the dependent variable is explained by the independent variable.

The correlation coefficient, r, can be calculated by

<math>r = {n S_{XY} - S_X S_Y \over \sqrt{(n S_{XX} - S_X^2) (n S_{YY} - S_Y^2)}}</math>

This statistic is a measure of how well a straight line describes the data. Values near zero suggest that the model is ineffective. <math>r^2</math> is frequently interpreted as the fraction of the variability explained by the independent variable <math>X</math>.
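
A sketch of this calculation on assumed illustrative data (the result can be checked against np.corrcoef):

```python
import numpy as np

# Illustrative paired data
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])
n = len(X)

S_X, S_Y = X.sum(), Y.sum()
S_XX, S_YY, S_XY = (X**2).sum(), (Y**2).sum(), (X * Y).sum()

r = (n * S_XY - S_X * S_Y) / np.sqrt((n * S_XX - S_X**2) * (n * S_YY - S_Y**2))
print(r, r**2)            # agrees with np.corrcoef(X, Y)[0, 1]
```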


