I am pretty excited about this example. LazyPredict automatically scores data with a bunch of different models, allowing the user to see performance across a variety of methods. I think this is where things are headed.
Here is more information about this method: https://pypi.org/project/lazypredict/
The first step is to install it. You can go to Anaconda, click on the environment, and then open the terminal using the "play button" next to the environment you want to install it in. Then you can enter the command: pip install lazypredict
NOTE: You may also need to install the modules tqdm, xgboost, lightgbm, and pytest. You can repeat the process above to install them, e.g.: pip install tqdm
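If you prefer to stay inside a Jupyter notebook, you can also run the installs from a cell. This is a minimal sketch; the package names are just the ones listed in the note above:

# Run these once in a notebook cell, or drop the leading '!' to run them in a terminal
!pip install lazypredict
!pip install tqdm xgboost lightgbm pytest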
This post assumes that you have installed lazypredict and its dependencies as described above, and that you are working in a Jupyter notebook.
First we import the Boston house prices dataset and print a description of it so we can examine what is in the data. Remember, in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.data.shape) #get (number of rows, number of columns or 'features')
print(boston.DESCR) #get a description of the dataset
(506, 13)

.. _boston_dataset:

Boston house prices dataset
---------------------------

**Data Set Characteristics:**

    :Number of Instances: 506

    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/

This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics ...', Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression problems.

.. topic:: References

    - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
    - Quinlan, R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
data = pd.DataFrame(boston.data, columns=boston.feature_names)
data.head()
|  | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.00632 | 18.0 | 2.31 | 0.0 | 0.538 | 6.575 | 65.2 | 4.0900 | 1.0 | 296.0 | 15.3 | 396.90 | 4.98 |
1 | 0.02731 | 0.0 | 7.07 | 0.0 | 0.469 | 6.421 | 78.9 | 4.9671 | 2.0 | 242.0 | 17.8 | 396.90 | 9.14 |
2 | 0.02729 | 0.0 | 7.07 | 0.0 | 0.469 | 7.185 | 61.1 | 4.9671 | 2.0 | 242.0 | 17.8 | 392.83 | 4.03 |
3 | 0.03237 | 0.0 | 2.18 | 0.0 | 0.458 | 6.998 | 45.8 | 6.0622 | 3.0 | 222.0 | 18.7 | 394.63 | 2.94 |
4 | 0.06905 | 0.0 | 2.18 | 0.0 | 0.458 | 7.147 | 54.2 | 6.0622 | 3.0 | 222.0 | 18.7 | 396.90 | 5.33 |
#For some reason, the loaded data does not include the target variable (MEDV), so we add it here
data['MEDV'] = pd.Series(data=boston.target, index=data.index)
data.describe() #get some basic stats on the dataset
|  | CRIM | ZN | INDUS | CHAS | NOX | RM | AGE | DIS | RAD | TAX | PTRATIO | B | LSTAT | MEDV |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 | 506.000000 |
mean | 3.613524 | 11.363636 | 11.136779 | 0.069170 | 0.554695 | 6.284634 | 68.574901 | 3.795043 | 9.549407 | 408.237154 | 18.455534 | 356.674032 | 12.653063 | 22.532806 |
std | 8.601545 | 23.322453 | 6.860353 | 0.253994 | 0.115878 | 0.702617 | 28.148861 | 2.105710 | 8.707259 | 168.537116 | 2.164946 | 91.294864 | 7.141062 | 9.197104 |
min | 0.006320 | 0.000000 | 0.460000 | 0.000000 | 0.385000 | 3.561000 | 2.900000 | 1.129600 | 1.000000 | 187.000000 | 12.600000 | 0.320000 | 1.730000 | 5.000000 |
25% | 0.082045 | 0.000000 | 5.190000 | 0.000000 | 0.449000 | 5.885500 | 45.025000 | 2.100175 | 4.000000 | 279.000000 | 17.400000 | 375.377500 | 6.950000 | 17.025000 |
50% | 0.256510 | 0.000000 | 9.690000 | 0.000000 | 0.538000 | 6.208500 | 77.500000 | 3.207450 | 5.000000 | 330.000000 | 19.050000 | 391.440000 | 11.360000 | 21.200000 |
75% | 3.677083 | 12.500000 | 18.100000 | 0.000000 | 0.624000 | 6.623500 | 94.075000 | 5.188425 | 24.000000 | 666.000000 | 20.200000 | 396.225000 | 16.955000 | 25.000000 |
max | 88.976200 | 100.000000 | 27.740000 | 1.000000 | 0.871000 | 8.780000 | 100.000000 | 12.126500 | 24.000000 | 711.000000 | 22.000000 | 396.900000 | 37.970000 | 50.000000 |
#Load the independent variables (the x1, x2, etc.) into a dataframe object called 'X'. Similarly for the dependent variable 'Y'
X = data.drop('MEDV', axis = 1) #define independent predictor set (excluding the dependent variable)
Y = data['MEDV'] #define the target values (i.e. the dependent variable)
We randomly select a third of our data to be the 'test' dataset. This way we can train our model on 2/3 of the data and test it on the remainder. Once we are confident that our model is generalizing well (i.e. there is not a HUGE difference in the training/testing performance, or in other words, it is not obviously overfitting), then we can use all of our data to train the model.
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.33, random_state = 5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(339, 13)
(167, 13)
(339,)
(167,)
X_train_arr, Y_train_arr = X_train.to_numpy(), Y_train.to_numpy() #we need to convert the data frames to arrays to work with the code below
X_test_arr, Y_test_arr = X_test.to_numpy(), Y_test.to_numpy()
We run LazyPredict:
from lazypredict.Supervised import LazyRegressor
from sklearn import datasets
from sklearn.utils import shuffle
import numpy as np
#The following code also creates randomized train/test samples. Since we are using the samples created above, we comment out this section of the sample code provided by the developers of lazypredict.
#boston = datasets.load_boston() #load the boston dataset
#X, y = shuffle(boston.data, boston.target, random_state=13) #this shuffles the data, keeping data/targets (or x and y) together
#X = X.astype(np.float32) #converts dataframe to float
#offset = int(X.shape[0] * 0.9) #this gets 90% of data for training, and 10% for testing
#X_train, y_train = X[:offset], y[:offset]
#X_test, y_test = X[offset:], y[offset:]
reg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)
models, predictions = reg.fit(X_train_arr, X_test_arr, Y_train_arr, Y_test_arr)
print(models)
98%|█████████▊| 42/43 [00:02<00:00, 21.07it/s]
StackingRegressor model failed to execute __init__() missing 1 required positional argument: 'estimators'
100%|██████████| 43/43 [00:02<00:00, 18.36it/s]
Model | R-Squared | RMSE | Time Taken
---|---|---|---
GradientBoostingRegressor | 0.91 | 2.84 | 0.09
ExtraTreesRegressor | 0.90 | 3.12 | 0.19
RandomForestRegressor | 0.90 | 3.14 | 0.25
XGBRegressor | 0.88 | 3.33 | 0.13
BaggingRegressor | 0.88 | 3.37 | 0.03
HistGradientBoostingRegressor | 0.86 | 3.68 | 0.46
AdaBoostRegressor | 0.85 | 3.76 | 0.08
LGBMRegressor | 0.84 | 3.89 | 0.07
PoissonRegressor | 0.77 | 4.69 | 0.02
ExtraTreeRegressor | 0.75 | 4.84 | 0.01
KNeighborsRegressor | 0.71 | 5.22 | 0.02
SGDRegressor | 0.70 | 5.33 | 0.01
LassoCV | 0.70 | 5.34 | 0.09
Ridge | 0.70 | 5.34 | 0.01
LassoLarsIC | 0.70 | 5.34 | 0.01
LassoLarsCV | 0.70 | 5.34 | 0.03
LinearRegression | 0.70 | 5.34 | 0.01
TransformedTargetRegressor | 0.70 | 5.34 | 0.01
BayesianRidge | 0.70 | 5.35 | 0.01
ElasticNetCV | 0.69 | 5.35 | 0.08
RidgeCV | 0.69 | 5.36 | 0.01
DecisionTreeRegressor | 0.68 | 5.44 | 0.02
LarsCV | 0.67 | 5.52 | 0.04
OrthogonalMatchingPursuitCV | 0.67 | 5.55 | 0.01
HuberRegressor | 0.65 | 5.69 | 0.03
LinearSVR | 0.65 | 5.76 | 0.01
Lars | 0.64 | 5.80 | 0.02
MLPRegressor | 0.63 | 5.86 | 0.37
Lasso | 0.63 | 5.89 | 0.01
ElasticNet | 0.59 | 6.18 | 0.01
GammaRegressor | 0.59 | 6.20 | 0.01
GeneralizedLinearRegressor | 0.58 | 6.24 | 0.01
TweedieRegressor | 0.58 | 6.24 | 0.01
SVR | 0.54 | 6.56 | 0.01
OrthogonalMatchingPursuit | 0.54 | 6.59 | 0.01
NuSVR | 0.53 | 6.62 | 0.01
RANSACRegressor | 0.45 | 7.20 | 0.07
PassiveAggressiveRegressor | 0.35 | 7.78 | 0.01
GaussianProcessRegressor | 0.02 | 9.59 | 0.02
DummyRegressor | -0.00 | 9.68 | 0.01
LassoLars | -0.00 | 9.68 | 0.01
KernelRidge | -5.04 | 23.80 | 0.02
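The models object returned by reg.fit is a pandas DataFrame indexed by model name, so you can slice and sort the leaderboard with the usual pandas operations. A minimal sketch (the column names 'R-Squared' and 'RMSE' match the printout above, but they may differ slightly between lazypredict versions):

#The leaderboard is a pandas DataFrame indexed by model name
best_models = models.sort_values(by='RMSE').head(5) #five lowest-error models
print(best_models)
print(models[models['R-Squared'] > 0.85]) #models above a chosen R-Squared threshold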
You can compare to the OLS method: https://predictivemodeler.com/2019/08/19/py-ols-boston-house-prices/
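If you want a quick comparison without leaving this notebook, you can fit ordinary least squares (scikit-learn's LinearRegression) on the same train/test split and compute the same metrics. This is a minimal sketch I am adding for illustration, not code from the linked post:

from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
import numpy as np

ols = LinearRegression() #ordinary least squares
ols.fit(X_train_arr, Y_train_arr)

#Compare training and test performance to check for obvious overfitting
for name, X_part, Y_part in [('train', X_train_arr, Y_train_arr), ('test', X_test_arr, Y_test_arr)]:
    pred = ols.predict(X_part)
    rmse = np.sqrt(mean_squared_error(Y_part, pred))
    print(name, 'R-Squared = %.2f' % r2_score(Y_part, pred), 'RMSE = %.2f' % rmse)

The test-set numbers should come out close to the LinearRegression row in the leaderboard above, since it is the same model evaluated on the same split.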
If you have ideas on how to improve this post, please let me know: https://predictivemodeler.com/feedback/