Python Non-Negative Least Squares (NNLS) Regression

A Basic Example Using the Boston House Prices Dataset

Overview

Linear regression (i.e. ordinary least squares) is one of the most commonly used statistical modeling techniques. It works wonderfully in many situations, but there are scenarios in which we want the coefficients to be non-negative (e.g. when we don't want the prediction to be negative; if we are predicting the price of something, a negative price would not make sense in that application).

In this example, we turn to something called non-negative least squares (NNLS) regression. We will not go into the mathematical details of this model. A few resources are listed below if you are interested in a deeper dive.

I will note that NNLS is a constrained form of linear regression, so its fit to the training data can never be better than unconstrained linear regression. There is a cost to putting constraints on the coefficients of a linear model, and we explore that cost in the example below.

Data Exploration

First we import the Boston house prices dataset and print a description of it so we can examine what is in the data. Remember that in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.

In [1]:
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.data.shape) #get (number of rows, number of columns or 'features')
print(boston.DESCR) #get a description of the dataset
(506, 13)
.. _boston_dataset:

Boston house prices dataset
---------------------------

**Data Set Characteristics:**  

    :Number of Instances: 506 

    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/


This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978.   Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980.   N.B. Various transformations are used in the table on
pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression
problems.   
     
.. topic:: References

   - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
   - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.

In [2]:
data = pd.DataFrame(boston.data, columns=boston.feature_names)
data.head()
Out[2]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT
0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 15.3 396.90 4.98
1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 17.8 396.90 9.14
2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 17.8 392.83 4.03
3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 18.7 394.63 2.94
4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 18.7 396.90 5.33
In [3]:
#load_boston() returns the target variable (MEDV) separately from the features, so we add it to the dataframe here
data['MEDV'] = pd.Series(data=boston.target, index=data.index)
data.describe() #get some basic stats on the dataset
Out[3]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT MEDV
count 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000 506.000000
mean 3.613524 11.363636 11.136779 0.069170 0.554695 6.284634 68.574901 3.795043 9.549407 408.237154 18.455534 356.674032 12.653063 22.532806
std 8.601545 23.322453 6.860353 0.253994 0.115878 0.702617 28.148861 2.105710 8.707259 168.537116 2.164946 91.294864 7.141062 9.197104
min 0.006320 0.000000 0.460000 0.000000 0.385000 3.561000 2.900000 1.129600 1.000000 187.000000 12.600000 0.320000 1.730000 5.000000
25% 0.082045 0.000000 5.190000 0.000000 0.449000 5.885500 45.025000 2.100175 4.000000 279.000000 17.400000 375.377500 6.950000 17.025000
50% 0.256510 0.000000 9.690000 0.000000 0.538000 6.208500 77.500000 3.207450 5.000000 330.000000 19.050000 391.440000 11.360000 21.200000
75% 3.677083 12.500000 18.100000 0.000000 0.624000 6.623500 94.075000 5.188425 24.000000 666.000000 20.200000 396.225000 16.955000 25.000000
max 88.976200 100.000000 27.740000 1.000000 0.871000 8.780000 100.000000 12.126500 24.000000 711.000000 22.000000 396.900000 37.970000 50.000000
In [4]:
#Load the independent variables (the x1, x2, etc.) into a dataframe object called 'X'. Similarly for the dependent variable 'Y'
X = data.drop('MEDV', axis = 1) #define independent predictor set (excluding the dependent variable)
Y = data['MEDV'] #define the target values (i.e. the dependent variable)

We randomly select a third of our data to be the 'test' dataset. This way we can train our model on 2/3 of the data and test it on the remainder. Once we are confident that our model is generalizing well (i.e. there is not a HUGE difference in the training/testing performance, or in other words, the model is not obviously overfitting), then we can use all of our data to train the model.

In [5]:
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(339, 13)
(167, 13)
(339,)
(167,)

Basic Linear Regression

In linear regression we assume that the relationship between the independent variables (X) and the dependent variable (Y) is linear, and then find the coefficients that minimize the squared error between the predicted Y and the actual Y.

$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \epsilon_i $$

We now import the LinearRegression method from the sklearn library. Note that the process of creating the model involves the very simple command lm.fit(X,Y). This fits the model, finding the intercept term $\beta_0$ and the coefficients $\beta_1, \dots, \beta_p$ that minimize the squared errors.

In [6]:
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,Y_train)
Y_pred_train = lm.predict(X_train) #predictions on training data
Y_pred = lm.predict(X_test) #predictions on testing data
# We plot predicted Y (y-axis) against actual Y (x-axis). Perfect predictions will lie on the diagonal. We see the diagonal trend, suggesting a 'good' fit
import matplotlib.pyplot as plt
plt.scatter(Y_test,Y_pred)
plt.xlabel("Actual Price: $Y_i$")
plt.ylabel("Predicted Price: $\hat{Y}_i$")
plt.title("Predicted Price vs Actual Price: $Y_i$ vs $\hat{Y}_i$")
plt.show()
In [7]:
#Let's get the coefficients - observe that some are significantly negative
print('Intercept term: ',lm.intercept_) # This gives us the intercept term
print('Coefficients: \n',lm.coef_) # This gives us the 13 coefficients, one per feature
Intercept term:  32.8589326340861
Coefficients: 
 [-1.56381297e-01  3.85490972e-02 -2.50629921e-02  7.86439684e-01
 -1.29469121e+01  4.00268857e+00 -1.16023395e-02 -1.36828811e+00
  3.41756915e-01 -1.35148823e-02 -9.88866034e-01  1.20588215e-02
 -4.72644280e-01]
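The raw coefficient array above is hard to read on its own. As a quick aid (a sketch that is not part of the original notebook; coef_table is just an illustrative name), we can pair each coefficient with its feature name to see which ones are negative:

#Pair each linear-regression coefficient with its feature name, sorted from most negative to most positive
coef_table = pd.Series(lm.coef_, index=X.columns).sort_values()
print(coef_table)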

Non-Negative Least Squares (NNLS) Regression

NNLS is a technique in mathematical optimization where we do not allow the regression coefficients to become negative while still minimizing the squared error. That is, given a matrix X and a column vector Y of dependent values, the goal is to find:

$$ \operatorname{arg\,min}_{\mathbf{b}} \; \lVert \mathbf{Xb} - \mathbf{Y} \rVert_{2} \quad \textrm{subject to } \mathbf{b} \geq 0 $$

where $\lVert \cdot \rVert_{2}$ is the Euclidean norm.
In [8]:
from scipy.optimize import nnls
Coef_nnls, rnorm = nnls(X_train.to_numpy(), Y_train.to_numpy(), maxiter=1000) #solve min ||Xb - Y|| subject to b >= 0; note that nnls does not fit an intercept term
Coef_nnls
Out[8]:
array([0.        , 0.1072718 , 0.        , 2.46918177, 0.        ,
       2.92558464, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.00873924, 0.        ])
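One thing to keep in mind is that scipy's nnls solves for the coefficient vector only; unlike the LinearRegression fit above, it does not include an intercept term. If you want non-negative coefficients together with an unconstrained intercept, one alternative is sketched below, assuming a scikit-learn version (0.24 or later) that supports the positive argument of LinearRegression:

#Sketch of an alternative: non-negative coefficients plus an unconstrained intercept
#(assumes scikit-learn >= 0.24, where LinearRegression accepts positive=True)
lm_nn = LinearRegression(positive=True)
lm_nn.fit(X_train, Y_train)
print('Intercept term: ', lm_nn.intercept_)
print('Coefficients: \n', lm_nn.coef_)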
In [9]:
import numpy as np
#Merge the training features, the actual target, and the linear-regression predictions into one dataframe
Train_Data=X_train.copy()
Train_Data['MEDV']=Y_train
Train_Data['Y_pred']=Y_pred_train
Train_Data.head()
Out[9]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT MEDV Y_pred
435 11.16040 0.0 18.10 0.0 0.740 6.629 94.6 2.1247 24.0 666.0 20.2 109.85 23.27 13.4 13.160729
88 0.05660 0.0 3.41 0.0 0.489 7.007 86.3 3.4217 2.0 270.0 17.8 396.90 5.50 23.6 30.416545
365 4.55587 0.0 18.10 0.0 0.718 3.561 87.9 1.6132 24.0 666.0 20.2 354.70 7.12 27.5 13.561560
242 0.10290 30.0 4.93 0.0 0.428 6.358 52.9 7.0355 6.0 300.0 16.6 372.75 11.22 22.2 24.315972
461 3.69311 0.0 18.10 0.0 0.713 6.376 88.4 2.5671 24.0 666.0 20.2 391.43 14.65 17.7 20.601680
In [10]:
#Compute the NNLS predictions on the training data as X_train times the NNLS coefficient vector
#(equivalent to summing coefficient * column over all 13 features)
Train_Data['Y_nnls'] = X_train.to_numpy() @ Coef_nnls
Train_Data
Out[10]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT MEDV Y_pred Y_nnls
435 11.16040 0.0 18.10 0.0 0.740 6.629 94.6 2.1247 24.0 666.0 20.2 109.85 23.27 13.4 13.160729 20.353706
88 0.05660 0.0 3.41 0.0 0.489 7.007 86.3 3.4217 2.0 270.0 17.8 396.90 5.50 23.6 30.416545 23.968175
365 4.55587 0.0 18.10 0.0 0.718 3.561 87.9 1.6132 24.0 666.0 20.2 354.70 7.12 27.5 13.561560 13.517814
242 0.10290 30.0 4.93 0.0 0.428 6.358 52.9 7.0355 6.0 300.0 16.6 372.75 11.22 22.2 24.315972 25.076572
461 3.69311 0.0 18.10 0.0 0.713 6.376 88.4 2.5671 24.0 666.0 20.2 391.43 14.65 17.7 20.601680 22.074327
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
486 5.69175 0.0 18.10 0.0 0.583 6.114 79.8 3.5459 24.0 666.0 20.2 392.68 14.98 19.1 19.543125 21.318748
189 0.08370 45.0 3.44 0.0 0.437 7.185 38.9 4.5667 5.0 398.0 15.2 396.90 5.39 34.9 34.433652 29.316160
495 0.17899 0.0 9.69 0.0 0.585 5.670 28.8 2.7986 6.0 391.0 19.2 393.29 17.60 23.1 17.750013 20.025120
206 0.22969 0.0 10.59 0.0 0.489 6.326 52.5 4.3549 4.0 277.0 18.6 394.87 10.97 24.4 23.786940 21.958111
355 0.10659 80.0 1.91 0.0 0.413 5.936 19.5 10.5857 4.0 334.0 22.0 376.04 5.57 20.6 16.580648 29.234318

339 rows × 16 columns

We measure the accuracy of the NNLS approach using the R-squared metric and compare it to linear regression. Note the large degradation in performance in this example (recall that the NNLS fit above has no intercept and most of its coefficients were forced to zero). A large degradation is by no means a given: for large and complex datasets, the performance can often be quite close to linear regression. It is something you have to test when weighing the benefits of NNLS.

In [11]:
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
rsq = r2_score(Y_train,Y_pred_train) #R-Squared of linear regression on the training data
print('R-square, Linear Regression: ',rsq)
rsq = r2_score(Train_Data['MEDV'],Train_Data['Y_pred']) #the same calculation from the merged dataframe, as a sanity check
print('R-square, Linear Regression: ',rsq)
rsq = r2_score(Train_Data['MEDV'],Train_Data['Y_nnls']) #R-Squared of NNLS on the training data
print('R-square, NNLS: ',rsq)
R-square, Linear Regression:  0.7551332741779997
R-square, Linear Regression:  0.7551332741779997
R-square, NNLS:  0.3905219488933651
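The train/test split section above mentioned comparing training and testing performance, but the notebook only reports training R-squared. A minimal sketch of the test-set comparison, reusing Y_pred (the linear-regression predictions on X_test) and the NNLS coefficients:

#Evaluate both models on the held-out test data
Y_nnls_test = X_test.to_numpy() @ Coef_nnls #NNLS predictions on the test set (no intercept)
print('Test R-square, Linear Regression: ', r2_score(Y_test, Y_pred))
print('Test R-square, NNLS: ', r2_score(Y_test, Y_nnls_test))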

You can compare this example to the OLS method: https://predictivemodeler.com/2019/08/19/py-ols-boston-house-prices/

Feedback

If you have ideas on how to improve this post, please let me know: https://predictivemodeler.com/feedback/

Reference: py.nnls_boston