Linear regression (i.e. ordinary least squares) is one of the most commonly used statistical modeling techniques. In this example, we fit a linear regression model (lm) and then compare it to a Stochastic Gradient Descent (sgd) method. We will not go into the mathematical details of either model; a few resources are listed below if you are interested in a deeper dive.
First we import the Boston house prices dataset and print a description of it so we can examine what is in the data. Remember, in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.data.shape) #get (number of rows, number of columns or 'features')
print(boston.DESCR) #get a description of the dataset
data = pd.DataFrame(boston.data, columns=boston.feature_names)
data.head()
#boston.data does not include the target variable (MEDV); it is stored separately in boston.target, so we add it to the dataframe here
data['MEDV'] = pd.Series(data=boston.target, index=data.index)
data.describe() #get some basic stats on the dataset
#Load the independent variables (x1, x2, etc.) into a dataframe called 'X', and the dependent variable into 'Y'
X = data.drop('MEDV', axis = 1) #define independent predictor set (excluding the dependent variable)
Y = data['MEDV'] #define the target values (i.e. the dependent variable)
We randomly select a third of our data to be the 'test' dataset. This way we can train our model on the remaining two thirds of the data and test it on the held-out third. Once we are confident that our model generalizes well (i.e. there is not a HUGE difference between the training and testing performance, or in other words, the model is not obviously overfitting), then we can use all of our data to train the model.
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
In linear regression we assume that the relationship between the independent variables (X) and the dependent variable (Y) is linear, and then find the coefficients that minimize the squared error between the predicted Y and the actual Y.
$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \epsilon_i $$
We now import the LinearRegression method from the sklearn library. Note that the process of creating the model involves the very simple command lm.fit(X_train, Y_train). This runs the model and finds the intercept term, $\beta_0$, and the coefficients $\beta_1, \ldots, \beta_p$ that minimize the squared errors.
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,Y_train)
y_pred_train = lm.predict(X_train) #predictions on training data
y_pred_test = lm.predict(X_test) #predictions on testing data
# We plot predicted Y (y-axis) against actual Y (x-axis). Perfect predictions would lie on the diagonal; the clear diagonal trend suggests a good fit.
import matplotlib.pyplot as plt
plt.scatter(Y_test,y_pred_test)
plt.xlabel("Actual Price: $Y_i$")
plt.ylabel("Predicted Price: $\hat{Y}_i$")
plt.title("Predicted Price vs Actual Price: $Y_i$ vs $\hat{Y}_i$")
plt.show()
#Let's get the coefficients - observe that some are significantly negative
print('Intercept term: ',lm.intercept_) # This gives us the intercept term
print('Coefficients: \n',lm.coef_) # This gives us the coefficients (one per feature, 13 in this case)
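As a sanity check, the same intercept and coefficients can be recovered in closed form from the normal equations, $\hat{\beta} = (X^\top X)^{-1} X^\top y$. Below is a minimal sketch (names like X_mat and beta_hat are our own, not sklearn's) that assumes the X_train and Y_train defined above:
import numpy as np
X_mat = np.column_stack([np.ones(len(X_train)), X_train]) # prepend a column of ones for the intercept
beta_hat = np.linalg.solve(X_mat.T @ X_mat, X_mat.T @ Y_train) # solve (X'X) beta = X'y
print('Intercept (normal equations): ', beta_hat[0]) # should match lm.intercept_ above
print('Coefficients (normal equations): \n', beta_hat[1:]) # should match lm.coef_ above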
Stochastic Gradient Descent (or SGD) is an iterative optimization technique for objectives that are smooth and differentiable. While calculating the exact gradient requires all of the data, SGD estimates it at each step using a randomly selected subset of the data (often a single sample). The advantages of this algorithm are its simplicity, computational efficiency, and ease of implementation. The disadvantages include having many hyperparameters to tune, sensitivity to the scale or units of the independent variables, and potentially having to iterate many times without knowing whether the final solution is a local or a global minimum of the objective.
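To make the update rule concrete, here is a minimal sketch of a single SGD pass for the squared-error loss. The function sgd_epoch and its learning-rate parameter lr_rate are illustrative names (not part of sklearn), and X and y are assumed to be numpy arrays (e.g. X_train.values):
import numpy as np
def sgd_epoch(X, y, beta, intercept, lr_rate=0.01): # illustrative helper, not a sklearn function
    for i in np.random.permutation(len(y)): # visit the samples in random order
        err = X[i] @ beta + intercept - y[i] # residual for this single sample
        beta -= lr_rate * err * X[i] # gradient step on the weights
        intercept -= lr_rate * err # gradient step on the intercept
    return beta, intercept
In practice you would run many such passes with a decaying learning rate; sklearn's SGDRegressor below handles these details (including the stopping tolerance tol) for us.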
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
sgd = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3)) # Always scale the input. The most convenient way is to use a pipeline.
sgd.fit(X_train, Y_train)
#Get predictions from SGD on training and on test data
y_pred_train_sgd = sgd.predict(X_train)
y_pred_test_sgd = sgd.predict(X_test)
We measure the accuracy of the SGD approach using the R-squared metric and compare it to linear regression. The performance of this algorithm on this dataset is very similar to that of linear regression (on both the training and testing data).
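For reference, R-squared compares the model's squared errors to those of a baseline that always predicts the mean of Y:
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$
A value of 1 indicates perfect predictions, while a value of 0 means the model does no better than always predicting the mean.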
from sklearn.metrics import r2_score
print('R-square, Training, Linear Regression: ', r2_score(Y_train,y_pred_train)) #R-Squared from linear regression, on the training data
print('R-square, Testing, Linear Regression: ', r2_score(Y_test,y_pred_test)) #R-Squared from linear regression, on the testing data
print('R-square, Training, SGD: ', r2_score(Y_train,y_pred_train_sgd)) #R-Squared from SGD, Training
print('R-square, Testing, SGD: ', r2_score(Y_test,y_pred_test_sgd)) #R-Squared from SGD, Testing
#Plot results from linear model and SGD
plt.scatter(Y_test, y_pred_test, label='LR')
plt.scatter(Y_test, y_pred_test_sgd, label='SGD')
plt.xlabel("Prediction: $\hat{Y}_i$")
plt.ylabel("Actual: $Y_i$")
plt.title("Actual vs Predicted")
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) # Place the legend to the right of the plot.
plt.show()
You can explore the linear regression model further, here: https://predictivemodeler.com/2019/08/19/py-ols-boston-house-prices/
If you have ideas on how to improve this post, please let me know: https://predictivemodeler.com/feedback/