

## Simple linear regression

by Sweet Lover on March 10, 2012


Okun's law in macroeconomics is an example of simple linear regression. Here the dependent variable (GDP growth) is presumed to be in a linear relationship with the changes in the unemployment rate.

In statistics, simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that the sum of squared residuals of the model (that is, the vertical distances between the points of the data set and the fitted line) is as small as possible.

The adjective simple refers to the fact that this regression is one of the simplest in statistics. The fitted line has slope equal to the correlation between y and x corrected by the ratio of the standard deviations of these variables. The intercept of the fitted line is such that it passes through the center of mass $(\bar{x}, \bar{y})$ of the data points.

Other regression methods besides the simple ordinary least squares (OLS) also exist (see linear regression model). In particular, when doing regression by eye, people usually tend to draw a slightly steeper line, closer to the one produced by the total least squares method. This occurs because it is more natural for one's mind to consider the orthogonal distances from the observations to the regression line, rather than the vertical ones as the OLS method does.
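The contrast between the two methods can be sketched numerically. The snippet below (with made-up synthetic data, so the exact values are illustrative only) computes the OLS slope from Cov[x, y]/Var[x] and the total least squares slope from the first principal component of the centered data, which minimizes orthogonal rather than vertical distances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with noise in both coordinates (illustrative only).
x_true = np.linspace(0.0, 10.0, 50)
x = x_true + rng.normal(scale=1.0, size=50)
y = 2.0 * x_true + 1.0 + rng.normal(scale=1.0, size=50)

# OLS slope: Cov[x, y] / Var[x]  (minimizes vertical distances).
ols_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Total least squares slope: direction of the first principal
# component of the centered data (minimizes orthogonal distances).
centered = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(centered, full_matrices=False)
tls_slope = vt[0, 1] / vt[0, 0]

print(ols_slope, tls_slope)  # the TLS slope is the steeper of the two
```

With noise in the x coordinate, the OLS slope is attenuated toward zero, while the TLS line stays steeper — matching the "regression by eye" tendency described above.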

## Fitting the regression line

Suppose there are n data points $\{y_i,\, x_i\}$, where $i = 1, 2, \ldots, n$. The goal is to find the equation of the straight line

$y = \alpha + \beta x, \,$

which would provide a "best" fit for the data points. Here "best" will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals of the linear regression model. In other words, the numbers α and β solve the following minimization problem:

$\text{Find }\min_{\alpha,\,\beta} Q(\alpha,\beta),\text{ where } Q(\alpha,\beta) = \sum_{i=1}^n \hat{\varepsilon}_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2$
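The objective Q is easy to evaluate directly. As a minimal sketch (the data points below are made up for illustration), a line close to the data yields a small sum of squared residuals, while a poor line yields a large one:

```python
# Sum of squared residuals Q(alpha, beta) for a line y = alpha + beta*x.
def Q(alpha, beta, xs, ys):
    return sum((y - alpha - beta * x) ** 2 for x, y in zip(xs, ys))

# Tiny illustrative data set (made up).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

print(Q(0.0, 2.0, xs, ys))  # near-optimal guess: small Q
print(Q(0.0, 0.0, xs, ys))  # horizontal line through the origin: large Q
```

Minimizing Q over all choices of α and β is exactly the problem the closed-form formulas below solve.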

By using either calculus, the geometry of inner product spaces, or simply expanding to get a quadratic in α and β, it can be shown that the values of α and β that minimize the objective function Q [1] are

\begin{align}
\hat\beta & = \frac{ \sum_{i=1}^{n} (x_{i}-\bar{x})(y_{i}-\bar{y}) }{ \sum_{i=1}^{n} (x_{i}-\bar{x})^2 } = \frac{ \sum_{i=1}^{n} x_{i}y_{i} - \frac1n \sum_{i=1}^{n} x_{i} \sum_{j=1}^{n} y_{j} }{ \sum_{i=1}^{n} x_{i}^2 - \frac1n \left(\sum_{i=1}^{n} x_{i}\right)^2 } \\[6pt]
& = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 } = \frac{ \operatorname{Cov}[x,y] }{ \operatorname{Var}[x] } = r_{xy} \frac{s_y}{s_x}, \\[6pt]
\hat\alpha & = \bar{y} - \hat\beta\,\bar{x},
\end{align}
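These closed-form estimates translate directly into code. The sketch below (again with made-up numbers) computes β̂ as the ratio of the sample covariance to the sample variance of x, then α̂ from the means, and cross-checks the result against NumPy's least-squares polynomial fit:

```python
import numpy as np

# Illustrative data set (made up for this sketch).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

xbar, ybar = x.mean(), y.mean()

# Closed-form least-squares estimates from the formulas above.
beta_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
alpha_hat = ybar - beta_hat * xbar

# Cross-check against NumPy's degree-1 least-squares fit.
slope, intercept = np.polyfit(x, y, deg=1)

print(beta_hat, alpha_hat)
```

Note that the fitted line passes through the center of mass of the data by construction: α̂ is chosen precisely so that ȳ = α̂ + β̂ x̄.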
