Least Angle Regression (LAR)

A Basic Example Using the Boston House Prices Dataset

Overview

Linear regression (i.e. ordinary least squares) is one of the most commonly used statistical modeling techniques. In this example, we fit a linear regression model (lm) and then compare it to a Least Angle Regression (LAR) model. We will not go into the mathematical details of either model; a few resources are listed below if you are interested in a deeper dive.

Data Exploration

First we import the Boston house prices dataset and print a description of it so we can examine what is in the data. Remember, in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.
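A minimal sketch of that cell is shown below. Note that load_boston was deprecated and later removed in newer scikit-learn releases, so this sketch assumes an older version in which it is still available:

```python
from sklearn.datasets import load_boston

# load the Boston house prices data into a Bunch object
boston = load_boston()

# print the dataset description (features, units, source)
print(boston.DESCR)
```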

We randomly select a third of our data to be the 'test' dataset. This way we can train our model on 2/3 of the data and test it on the remainder. Once we are confident that our model is generalizing well (i.e. there is not a huge difference between the training and testing performance, or in other words, it is not obviously overfitting), we can use all of our data to train the model.
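A sketch of the split, assuming the data was loaded as above (the variable names X and Y and the random_state value are illustrative assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# features as a DataFrame, target (median house value) as an array
X = pd.DataFrame(boston.data, columns=boston.feature_names)
Y = boston.target

# hold out one third of the rows as the test set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=1/3, random_state=0)
```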

Basic Linear Regression

In linear regression we assume that the relationship between the independent variables (X) and the dependent variable (Y) is linear, and we then find the coefficients that minimize the squared error between the predicted Y and the actual Y.

$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \epsilon_i $$

We now import the LinearRegression class from the sklearn library. Note that fitting the model comes down to the very simple command lm.fit(X,Y). This runs the model and finds the intercept term, $\beta_0$, and the coefficients, $\beta_1, \dots, \beta_p$ (one per feature), that minimize the squared errors.
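A sketch of that cell, using the train/test split from above (the exact variable names are assumptions):

```python
from sklearn.linear_model import LinearRegression

# fit ordinary least squares on the training data
lm = LinearRegression()
lm.fit(X_train, Y_train)

print("Intercept:", lm.intercept_)
print("Coefficients:", lm.coef_)
print("Training R^2:", lm.score(X_train, Y_train))
print("Testing  R^2:", lm.score(X_test, Y_test))
```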

Least Angle Regression

From the scikit-learn documentation:

Least-angle regression (LARS) is a regression algorithm for high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. LARS is similar to forward stepwise regression. At each step, it finds the feature most correlated with the target. When there are multiple features having equal correlation, instead of continuing along the same feature, it proceeds in a direction equiangular between the features.

The advantages of LARS are:

  1. It is numerically efficient in contexts where the number of features is significantly greater than the number of samples.
  2. It is computationally just as fast as forward selection and has the same order of complexity as ordinary least squares.
  3. It produces a full piecewise linear solution path, which is useful in cross-validation or similar attempts to tune the model.
  4. If two features are almost equally correlated with the target, then their coefficients should increase at approximately the same rate. The algorithm thus behaves as intuition would expect, and also is more stable.
  5. It is easily modified to produce solutions for other estimators, like the Lasso.

The disadvantages of the LARS method include:

  1. Because LARS is based upon an iterative refitting of the residuals, it would appear to be especially sensitive to the effects of noise. This problem is discussed in detail by Weisberg in the discussion section of the Efron et al. (2004) Annals of Statistics article.
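The "full piecewise linear solution path" mentioned in point 3 above can be inspected with scikit-learn's lars_path function. The sketch below is illustrative and reuses the variables defined in the earlier cells:

```python
import numpy as np
from sklearn.linear_model import lars_path

# compute the full LARS solution path on the training data
alphas, active, coefs = lars_path(np.asarray(X_train, dtype=float), Y_train, method='lar')

# 'active' lists the features in the order they entered the model;
# each row of 'coefs' traces one coefficient along the path
print("Order in which features entered:", [boston.feature_names[i] for i in active])
print("Coefficient path shape (features x steps):", coefs.shape)
```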

We measure the accuracy of the new approach using the R-squared metric and compare it to linear regression. The performance of this algorithm on this dataset is inferior to that of linear regression (on both the training and testing data). However, the benefit of LAR over linear regression is that it results in a simpler model (compare the coefficients of the linear regression to those of LAR, and note the zeroed coefficients for the variables that were not selected). In this respect LAR is similar to the forward selection method.
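A sketch of the LAR fit and comparison (the cap on the number of non-zero coefficients below is an illustrative choice, not a value from the original post):

```python
from sklearn.linear_model import Lars

# limit LAR to a handful of features to get a sparser, simpler model
lar = Lars(n_nonzero_coefs=5)
lar.fit(X_train, Y_train)

print("LAR coefficients:", lar.coef_)  # note the zeroed coefficients
print("Training R^2:", lar.score(X_train, Y_train))
print("Testing  R^2:", lar.score(X_test, Y_test))
```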

You can explore the linear regression model further here: https://predictivemodeler.com/2019/08/19/py-ols-boston-house-prices/

Feedback

If you have ideas on how to improve this post, please let me know: https://predictivemodeler.com/feedback/

Reference: py.lar_boston