For a long time I've been interested in TensorFlow. I've heard of the amazing things that people have achieved with this framework, and while I really wanted to dabble in it, it seemed a daunting thing to install, let alone learn. The great news is that the Google Brain team has made tremendous strides in making TensorFlow easy to install and use! Check out these earlier posts:
In this post we will see how easy it is to modify the regression tutorial published by the Google team to work with the Boston House Prices data. We will learn how to load and explore the data, normalize the features, build and train a neural network with Keras, stop training early when validation performance degrades, and evaluate the model against held-out test data.
!pip install -q seaborn
!pip install -U scikit-learn
Import TensorFlow (tf) and load the other libraries that we will need later (e.g. matplotlib, pandas, seaborn):
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
First we import the Boston house prices dataset and print a description of it so we can examine what is in the data. Remember, in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.data.shape) #get (number of rows, number of columns or 'features')
print(boston.DESCR) #get a description of the dataset
# Next, we load the data into a 'dataframe' object for easier manipulation, and also print the first few rows in order to examine it
data = pd.DataFrame(boston.data, columns=boston.feature_names)
data.head() #notice that the target variable (MEDV) is not included
#The loaded data does not include the target variable (MEDV); scikit-learn stores it separately in boston.target, so we add it here
data['MEDV'] = pd.Series(data=boston.target, index=data.index)
data.describe() #get some basic stats on the dataset
data.tail() #check out the end of the data (last 5 rows)
See if there is missing data:
data.isna().sum()
There is no missing data. Good! Let's proceed to split the data into a random 70% for training, and the remainder for testing. Remember, we did a similar split in the linear regression example using the Boston house price dataset.
train_dataset = data.sample(frac=0.7, random_state=0)
test_dataset = data.drop(train_dataset.index)
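As a quick sanity check, we can confirm the sizes of the two splits; with 506 rows in total, we should see roughly 354 training rows and 152 testing rows:
print(len(train_dataset), len(test_dataset)) #expect roughly a 70/30 split of the 506 rows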
Have a quick look at the joint distribution of a few pairs of columns from the training set.
sns.pairplot(train_dataset[["MEDV", "CRIM","AGE","DIS","TAX"]], diag_kind="kde")
Also look at overall statistics:
train_stats = train_dataset.describe()
train_stats.pop("MEDV")
train_stats = train_stats.transpose()
train_stats
Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
train_labels = train_dataset.pop('MEDV')
test_labels = test_dataset.pop('MEDV')
Look again at the train_stats block above and note how different the ranges of each feature are.
It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input.
Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on.
def norm(x):
  return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
This normalized data is what we will use to train the model.
Caution: The statistics used to normalize the inputs here (the mean and standard deviation) need to be applied to any other data that is fed to the model. That includes the test set as well as live data when the model is used in production.
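To make this concrete, here is a minimal sketch of what scoring new data should look like. We don't have real live data here, so live_sample is just a stand-in row taken from the training set:
# Minimal sketch: any new data point must be normalized with the *training* statistics.
# live_sample is a stand-in row; in production this would be a real incoming observation.
live_sample = train_dataset.iloc[[0]]
normed_live = norm(live_sample)  #reuse train_stats; never recompute statistics on new data
normed_live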
Let's build our model. Here, we'll use a Sequential model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model building steps are wrapped in a function, build_model, since we'll create a second model later on.
def build_model():
  model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
  ])

  optimizer = tf.keras.optimizers.RMSprop(0.001)

  model.compile(loss='mse',
                optimizer=optimizer,
                metrics=['mae', 'mse'])
  return model
model = build_model()
Use the .summary method to print a simple description of the model:
model.summary()
Now try out the model. Take a batch of 10 examples from the training data and call model.predict on it.
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
It seems to be working, and it produces a result of the expected shape and type.
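We can also check the shape explicitly:
print(example_result.shape)  #expect (10, 1): one continuous prediction per example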
Train the model for 1000 epochs, and record the training and validation metrics in the history object.
Note the validation_split setting: it holds out 20% of the training data as a validation set and uses the remainder for training. Importantly, this is separate from the testing data, which we do not touch during model training.
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs):
    if epoch % 100 == 0: print('')
    print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
Visualize the model's training progress using the stats stored in the history object.
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Abs Error [MEDV]')
  # In TF 2.x, metrics=['mae', 'mse'] are recorded under the keys 'mae' and 'mse'
  plt.plot(hist['epoch'], hist['mae'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mae'],
           label='Val Error')
  plt.ylim([0,5])
  plt.legend()

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Square Error [$MEDV^2$]')
  plt.plot(hist['epoch'], hist['mse'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mse'],
           label='Val Error')
  plt.ylim([0,20])
  plt.legend()
  plt.show()
plot_history(history)
This graph shows little improvement, and in fact a fairly severe degradation, in the validation error after about 100 epochs. Let's update the model.fit call to automatically stop training when the validation score doesn't improve. We'll use an EarlyStopping callback that tests a training condition after every epoch. If a set number of epochs elapses without showing improvement, training stops automatically.
You can learn more about this callback here.
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
Let's re-plot the history; hopefully we'll see that model training stopped before things got worse on the validation data.
plot_history(history)
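As a quick check, the history object also records how many epochs actually ran before the callback stopped training:
print('Training stopped after {} epochs'.format(len(history.epoch)))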
The graph shows that on the validation set, the average error is usually around +/- 2 MEDV (or about +/- 2,000 dollars from the true median value of owner-occupied homes, since MEDV is expressed in thousands of dollars). This is pretty good! And as we shall soon see, much better than the linear regression model!
Let's see how well the model generalizes by using the test set, which we did not use at all when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f} MEDV".format(mae))
Finally, we predict MEDV values using data in the testing set (and also the training set, which we will use below to compute more error metrics):
test_predictions = model.predict(normed_test_data).flatten()
train_predictions = model.predict(normed_train_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MEDV]')
plt.ylabel('Predictions [MEDV]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
The graph above looks pretty, pretty, pretty, good! (pardon the Curb reference!). To get more than a visual understanding of the error, let's compute some error metrics.
We start by developing an empirical distribution of the error term (this is a very useful piece of code!). If you compare this to the distribution you saw in the OLS example, you can see that this model clearly performs better.
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MEDV]")
_ = plt.ylabel("Count")
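To put numbers on this distribution, we can check whether the errors are centered near zero (little systematic bias) and how spread out they are:
print('Mean of error: {:5.2f} MEDV'.format(error.mean()))  #near 0 means little systematic bias
print('Std of error:  {:5.2f} MEDV'.format(error.std()))   #spread of the prediction errors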
Just like we did in the OLS example, let's calculate the mean squared error, mean absolute error, and R-squared on the training and testing data. This is useful because we can see the extent to which performance degrades from the training to the testing data (note: we expect some degradation).
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
mse = mean_squared_error(test_labels, test_predictions)
print('Mean Squared Error: ',mse)
mae = mean_absolute_error(test_labels, test_predictions)
print('Mean Absolute Error: ',mae)
rsq = r2_score(train_labels,train_predictions) #R-Squared on the training data
print('R-square, Training: ',rsq)
rsq = r2_score(test_labels,test_predictions) #R-Squared on the testing data
print('R-square, Testing: ',rsq)
If you compare the error statistics above to the OLS example using the Boston House Price data, you will see that the metrics are vastly better! An average error of +/- $2,400 vs. $3,500! Also, the R-squared value on the training data is much better: 88% vs. 69%. You can make minor modifications to the training routine (e.g. more epochs, a different activation function, etc.) to get even better results.
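For example, here is a minimal sketch of one such variation, using wider hidden layers and a tanh activation. These particular choices are illustrative, not tuned recommendations, and build_model_v2 is a name I'm introducing just for this sketch:
def build_model_v2():
  # Illustrative variation on build_model: wider hidden layers and tanh instead of relu.
  model = keras.Sequential([
    layers.Dense(128, activation='tanh', input_shape=[len(train_dataset.keys())]),
    layers.Dense(128, activation='tanh'),
    layers.Dense(1)
  ])
  model.compile(loss='mse',
                optimizer=tf.keras.optimizers.RMSprop(0.001),
                metrics=['mae', 'mse'])
  return model
You would train and evaluate it exactly as before, with the same early stopping callback, and compare the resulting error metrics.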
This notebook shows how tremendously powerful the TensorFlow neural network approach can be for crafting highly predictive models as compared to regression. Of course, regression models are easier to communicate, given that they are much more broadly known and understood. The perfect model for your application will depend not just on predictive accuracy, but also on how effective the model is for your application and target audience.
If you have ideas on how to improve this post, please let me know: https://predictivemodeler.com/feedback/