The notion of an independent variable often (but not always) implies the ability to choose the levels of the independent variable and that the dependent variable will respond naturally as in the stimulus-response model. The independent variable <math>x</math> may be a scalar or a vector. In the former case we may write one of the simplest linear-regression models as follows:

<math>y_i = \alpha + \beta x_i + \epsilon_i,</math>

where <math>\epsilon_i</math> is the random "error".
Historically, in applications to measurements in astronomy, the "error" was actually a random measurement error, but in many applications, <math>\epsilon</math> is merely the amount by which the individual <math>y</math>-value differs from the average <math>y</math>-value among individuals having the same <math>x</math>-value. The average value of the random "error" <math>\epsilon</math> is zero. Often in linear regression problems statisticians rely on the Gauss-Markov assumptions:

* The random errors <math>\epsilon_i</math> have expected value 0.
* The random errors <math>\epsilon_i</math> are uncorrelated (this is weaker than an assumption of probabilistic independence).
* The random errors <math>\epsilon_i</math> are "homoscedastic", i.e., they all have the same variance.
Sometimes stronger assumptions are relied on:

* The random errors <math>\epsilon_i</math> are independent, each normally distributed with expected value 0 and the same variance <math>\sigma^2</math>.
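For concreteness, the following sketch (hypothetical parameter values; NumPy assumed available) generates data satisfying these stronger assumptions:

<pre>
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true parameters, chosen only for illustration.
alpha, beta, sigma = 1.0, 2.0, 0.5
x = np.linspace(0.0, 10.0, 50)

# Each error is drawn independently from N(0, sigma^2),
# matching the stronger assumptions above.
eps = rng.normal(0.0, sigma, size=x.shape)
y = alpha + beta * x + eps
</pre>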
If <math>x_i</math> is a vector, we can take the product <math>\beta x_i</math> to be a "dot product".
It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of <math>y = \alpha + \beta x</math> is a line. But in fact, if the model is

<math>y_i = \alpha + \beta x_i + \gamma x_i^2 + \epsilon_i</math>
(in which case we have put the vector <math>(x_i, x_i^2)</math> in the role formerly played by <math>x_i</math> and the vector <math>(\beta, \gamma)</math> in the role formerly played by <math>\beta</math>), then the problem is still one of linear regression, even though the graph is not a straight line. The rationale for this terminology will be explained below.
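To make the point concrete, here is a minimal sketch (hypothetical data; NumPy assumed available) that fits the quadratic model with the same linear least-squares machinery, simply by using the vector <math>(1, x_i, x_i^2)</math> as the regressor:

<pre>
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.0])

# Design matrix with columns 1, x, x^2: the model
# y = alpha + beta*x + gamma*x^2 is linear in (alpha, beta, gamma).
X = np.column_stack([np.ones_like(x), x, x**2])

# Ordinary least squares: minimizes ||y - X d||^2 over d.
d, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha, beta, gamma = d
print(alpha, beta, gamma)
</pre>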
A statistician will usually estimate the unobservable values of the parameters <math>\alpha</math> and <math>\beta</math> by the method of least squares, which consists of finding the values of <math>a</math> and <math>b</math> that minimize the sum of squares of the residuals

<math>\sum_{i=1}^n (y_i - a - b x_i)^2.</math>
Notice that, whereas the errors are independent, the residuals cannot be independent, because the use of least-squares estimates implies that the sum of the residuals must be 0, and the dot product of the vector of residuals with the vector of <math>x</math>-values must be 0, i.e., we must have

<math>\sum_{i=1}^n (y_i - a - b x_i) = 0</math>

and

<math>\sum_{i=1}^n x_i (y_i - a - b x_i) = 0.</math>
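One way to see where these two constraints come from: at the minimum of the sum of squares, the partial derivatives with respect to <math>a</math> and <math>b</math> must vanish, giving

<math>\frac{\partial}{\partial a}\sum_{i=1}^n (y_i - a - b x_i)^2 = -2\sum_{i=1}^n (y_i - a - b x_i) = 0</math>

and

<math>\frac{\partial}{\partial b}\sum_{i=1}^n (y_i - a - b x_i)^2 = -2\sum_{i=1}^n x_i (y_i - a - b x_i) = 0.</math>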
These facts make it possible to use Student's t-distribution with <math>n - 2</math> degrees of freedom (so named in honor of the pseudonymous "Student") to find confidence intervals for <math>\alpha</math> and <math>\beta</math>.
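As a minimal illustration (hypothetical data; NumPy and SciPy assumed available), a 95% confidence interval for <math>\beta</math> can be computed as follows:

<pre>
import numpy as np
from scipy import stats

# Hypothetical sample data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])
n = len(x)

# Least-squares estimates a and b.
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()

# Variance estimate s^2 from the residuals, with n - 2 degrees
# of freedom, and the standard error of b.
resid = y - a - b * x
s2 = np.sum(resid**2) / (n - 2)
se_b = np.sqrt(s2 / np.sum((x - x.mean())**2))

# 95% interval from Student's t-distribution with n - 2 df.
t_crit = stats.t.ppf(0.975, df=n - 2)
print(b - t_crit * se_b, b + t_crit * se_b)
</pre>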
Denote by <math>Y</math> the column vector whose <math>i</math>th entry is <math>y_i</math>, and by <math>X</math> the <math>n \times 2</math> matrix whose second column contains <math>x_i</math> as its <math>i</math>th entry and whose first column contains <math>n</math> 1s. Let <math>\epsilon</math> be the column vector containing the errors <math>\epsilon_i</math>. Let <math>\delta</math> and <math>d</math> be, respectively, the <math>2 \times 1</math> column vector containing <math>\alpha</math> and <math>\beta</math> and the <math>2 \times 1</math> column vector containing the estimates <math>a</math> and <math>b</math>. Then the model can be written as

<math>Y = X\delta + \epsilon,</math>
where <math>\epsilon</math> is normally distributed with expected value 0 (i.e., a column vector of 0s) and variance <math>\sigma^2 I_n</math>, where <math>I_n</math> is the <math>n \times n</math> identity matrix. The vector <math>Xd</math> (recall that <math>d</math> is the vector of estimates) is then the orthogonal projection of <math>Y</math> onto the column space of <math>X</math>.
Then it can be shown that

<math>d = (X'X)^{-1}X'Y</math>
(where <math>X'</math> is the transpose of <math>X</math>) and that the sum of squares of residuals is

<math>Y'(I_n - X(X'X)^{-1}X')Y.</math>
The fact that the matrix <math>X(X'X)^{-1}X'</math> is a symmetric idempotent matrix is relied on constantly, both in computations and in proofs of theorems. The linearity of <math>d</math> as a function of the vector <math>Y</math>, expressed above by writing <math>d = (X'X)^{-1}X'Y</math>, is the reason why this is called "linear" regression. Nonlinear regression uses nonlinear methods of estimation.
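A minimal sketch of the matrix computation (hypothetical data; NumPy assumed available):

<pre>
import numpy as np

# Hypothetical data; X has a column of 1s and a column of x-values.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([1.8, 4.1, 5.9, 8.2, 9.9])
X = np.column_stack([np.ones_like(x), x])

# d = (X'X)^{-1} X'Y, computed by solving the normal equations;
# numerically this is preferable to forming the inverse explicitly.
d = np.linalg.solve(X.T @ X, X.T @ Y)

# Xd is the orthogonal projection of Y onto the column space of X,
# so the residual vector Y - Xd is orthogonal to both columns of X.
resid = Y - X @ d
print(X.T @ resid)  # approximately [0, 0]
</pre>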
The matrix <math>I_n - X(X'X)^{-1}X'</math> that appears above is a symmetric idempotent matrix of rank <math>n - 2</math>. Here is an example of the use of that fact in the theory of linear regression. The finite-dimensional spectral theorem of linear algebra says that any real symmetric matrix <math>M</math> can be diagonalized by an orthogonal matrix <math>G</math>, i.e., the matrix <math>G'MG</math> is a diagonal matrix. If the matrix <math>M</math> is also idempotent, then the diagonal entries of <math>G'MG</math> must themselves be idempotent numbers, and only two real numbers are idempotent: 0 and 1. Since the rank is <math>n - 2</math>, the matrix <math>I_n - X(X'X)^{-1}X'</math>, after diagonalization, has <math>n - 2</math> 1s and two 0s on the diagonal. That is most of the work in showing that the sum of squares of residuals, divided by <math>\sigma^2</math>, has a chi-square distribution with <math>n - 2</math> degrees of freedom.
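This eigenvalue structure is easy to check numerically; the following sketch (hypothetical design matrix; NumPy assumed available) shows <math>n - 2</math> eigenvalues equal to 1 and two equal to 0:

<pre>
import numpy as np

# Hypothetical design matrix with n = 5 rows.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([np.ones_like(x), x])
n = X.shape[0]

# M = I_n - X (X'X)^{-1} X' is symmetric and idempotent.
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

# For a symmetric idempotent matrix the eigenvalues are 0s and 1s,
# and the trace equals the rank, here n - 2 = 3.
print(np.round(np.linalg.eigvalsh(M), 10))
print(np.trace(M))
</pre>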

The first method of displaying the residuals uses a histogram or cumulative distribution plot to depict the similarity (or lack thereof) of their distribution to a normal distribution. Non-normality suggests that the model may not be a good summary description of the data.
We plot the residuals, <math>Y - a - bX</math>, against the independent variable <math>X</math>. There should be no discernible trend or pattern if the model is satisfactory for these data. Some of the possible problems (a plotting sketch follows the list below) are:

* A curved pattern, which suggests that a linear function of <math>X</math> is not adequate.
* A spread of the residuals that grows or shrinks with <math>X</math>, which suggests that the variance of the errors is not constant.
* A few points lying far from the rest, which suggests outliers that may unduly influence the fit.
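A minimal plotting sketch (hypothetical data; NumPy and Matplotlib assumed available):

<pre>
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data and least-squares fit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.2, 3.9, 6.1, 7.8, 10.2, 11.9])
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()

# Residuals against the independent variable; a satisfactory model
# shows a patternless horizontal band around zero.
resid = y - a - b * x
plt.scatter(x, resid)
plt.axhline(0.0)
plt.xlabel("x")
plt.ylabel("residual")
plt.show()
</pre>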
The correlation coefficient, <math>r</math>, can be calculated by

<math>r = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2 \sum_{i=1}^n (y_i - \bar{y})^2}}.</math>
This statistic is a measure of how well a straight line describes the data. Values near zero suggest that the model is ineffective. <math>r^2</math> is frequently interpreted as the fraction of the variability in <math>Y</math> explained by the independent variable <math>X</math>.
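For example (hypothetical data; NumPy assumed available), <math>r</math> can be computed directly from the formula above, or equivalently with np.corrcoef:

<pre>
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.8, 8.1, 9.7])

# r from the formula above.
sxy = np.sum((x - x.mean()) * (y - y.mean()))
sxx = np.sum((x - x.mean())**2)
syy = np.sum((y - y.mean())**2)
r = sxy / np.sqrt(sxx * syy)
print(r, r**2)                  # r^2: fraction of variability explained
print(np.corrcoef(x, y)[0, 1])  # same r via NumPy
</pre>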