In this example, we build a Decision Tree Classifier (dtc) model to classify iris plant species based on measurements of their petals and sepals. We will not go into the mathematical details of the model; a few resources are listed below if you are interested in a deeper dive.
This script assumes that you have reviewed the following (or already have this know-how):
First we import the iris dataset and print a description of it so we can examine what is in the data. Remember, in order to execute a 'cell' like the one below, you can 1) click on it and run it using the run button above, or 2) click in the cell and hit shift+enter.
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)  # get (number of rows, number of columns or 'features')
print(iris.DESCR)       # get a description of the dataset
(150, 4)
.. _iris_dataset:

Iris plants dataset
--------------------

**Data Set Characteristics:**

:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
    - sepal length in cm
    - sepal width in cm
    - petal length in cm
    - petal width in cm
    - class:
        - Iris-Setosa
        - Iris-Versicolour
        - Iris-Virginica

:Summary Statistics:

============== ==== ==== ======= ===== ====================
                Min  Max   Mean    SD   Class Correlation
============== ==== ==== ======= ===== ====================
sepal length:   4.3  7.9   5.84   0.83    0.7826
sepal width:    2.0  4.4   3.05   0.43   -0.4194
petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
============== ==== ==== ======= ===== ====================

:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
:Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken from Fisher's paper. Note that it's the same as in R, but not as in the UCI Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.

.. topic:: References

   - Fisher, R.A. "The use of multiple measurements in taxonomic problems" Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis. (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System Structure and Classification Rule for Recognition in Partially Exposed Environments". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al's AUTOCLASS II conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
data = pd.DataFrame(iris.data, columns=iris.feature_names)
data.head()
(output: the first five rows of the DataFrame, with columns sepal length (cm), sepal width (cm), petal length (cm), and petal width (cm))
data['Class'] = pd.Series(data=iris.target_names[iris.target], index=data.index)
data.describe()  # get some basic stats on the dataset
(output: summary statistics -- count, mean, std, min, quartiles, and max -- for each of the four measurement columns)
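As a quick sanity check on the 33.3% class distribution mentioned in the dataset description, you can also count the rows per class. A minimal sketch, assuming the data DataFrame built above:

data['Class'].value_counts()  # should show 50 rows each for setosa, versicolor, and virginica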
# Load the independent variables (the x1, x2, etc.) into a dataframe object called 'X'.
# Similarly for the dependent variable 'Y'.
X = data.drop('Class', axis=1)  # define independent predictor set (excluding the dependent variable)
Y = data['Class']               # define the target values (i.e. the dependent variable)
We randomly select a quarter of our data to be the 'test' dataset. This way we can train our model on the remaining data, and test it on data not used in training. Once we are confident that our model is generalizing well (i.e. there is not a HUGE difference between the training and testing performance, or in other words, it is not obviously overfitting), then we can use all of our data to train the model.
from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(112, 4)
(38, 4)
(112,)
(38,)
The goal of the basic decision tree is to find decision boundaries, expressed as a series of if-then-else statements, that classify observations into different categories.
For more information:
from sklearn import tree

clf = tree.DecisionTreeClassifier()
clf = clf.fit(X_train, Y_train)
tree.plot_tree(clf)  # draw the fitted tree (produces the list of node labels and the plot below)
(output: a list of matplotlib Text node labels and a rendered plot of the fitted decision tree)
import graphviz  # to install the graphviz package, you may use: conda install -c anaconda graphviz

dot_data = tree.export_graphviz(clf, out_file=None,
                                feature_names=iris.feature_names,
                                class_names=iris.target_names,
                                filled=True, rounded=True,
                                special_characters=True)
graph = graphviz.Source(dot_data)
graph
Using the above chart you can see the classification as a series of if-true, if-false branches. You can also output the tree more compactly, as below:
from sklearn.tree import export_text

r = export_text(clf, feature_names=iris['feature_names'])
print(r)
|--- petal length (cm) <= 2.45
|   |--- class: setosa
|--- petal length (cm) >  2.45
|   |--- petal width (cm) <= 1.75
|   |   |--- sepal length (cm) <= 4.95
|   |   |   |--- class: virginica
|   |   |--- sepal length (cm) >  4.95
|   |   |   |--- sepal length (cm) <= 7.05
|   |   |   |   |--- petal length (cm) <= 4.95
|   |   |   |   |   |--- class: versicolor
|   |   |   |   |--- petal length (cm) >  4.95
|   |   |   |   |   |--- sepal width (cm) <= 2.45
|   |   |   |   |   |   |--- class: virginica
|   |   |   |   |   |--- sepal width (cm) >  2.45
|   |   |   |   |   |   |--- class: versicolor
|   |   |   |--- sepal length (cm) >  7.05
|   |   |   |   |--- class: virginica
|   |--- petal width (cm) >  1.75
|   |   |--- class: virginica
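These printed rules are literally the if-then-else logic mentioned earlier. As a rough, hand-written sketch (the function name and example input are just for illustration, and a retrained tree may learn different splits), the same decisions could be expressed as:

def classify_iris(sepal_length, sepal_width, petal_length, petal_width):
    # Hand-transcribed from the export_text output above.
    if petal_length <= 2.45:
        return 'setosa'
    if petal_width <= 1.75:
        if sepal_length <= 4.95:
            return 'virginica'
        if sepal_length <= 7.05:
            if petal_length <= 4.95:
                return 'versicolor'
            return 'virginica' if sepal_width <= 2.45 else 'versicolor'
        return 'virginica'  # sepal length > 7.05
    return 'virginica'      # petal width > 1.75

print(classify_iris(5.1, 3.5, 1.4, 0.2))  # a typical setosa-like sample lands in the first branch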
# Accuracy score on training data. This score is the "subset accuracy", or the % of samples
# that have ALL their labels classified correctly. It is a strict metric.
print('Score, Training, dtc: ', clf.score(X_train, Y_train))
print('Score, Testing, dtc: ', clf.score(X_test, Y_test))
Score, Training, dtc:  1.0
Score, Testing, dtc:  0.8947368421052632
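For a single-output classifier like this one, .score() is the fraction of samples whose predicted class matches the true class. A minimal sketch of the same check done explicitly, assuming the clf, X_test, and Y_test objects defined above:

from sklearn.metrics import accuracy_score

Y_pred = clf.predict(X_test)           # predicted class for each test sample
print(accuracy_score(Y_test, Y_pred))  # fraction of correct predictions
print((Y_pred == Y_test).mean())       # the same number computed by hand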
Note that the decision tree has essentially overfit the training data, achieving a perfect score of 1.0, while the score on the unseen testing data is lower, at about 89%.
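One common way to reduce this kind of overfitting is to constrain the tree, for example by limiting its depth. As an illustrative sketch only (max_depth=3 is an arbitrary choice here, not a tuned value):

clf_pruned = tree.DecisionTreeClassifier(max_depth=3, random_state=5)
clf_pruned = clf_pruned.fit(X_train, Y_train)
print('Score, Training, pruned dtc: ', clf_pruned.score(X_train, Y_train))
print('Score, Testing, pruned dtc: ', clf_pruned.score(X_test, Y_test))

A shallower tree will usually score a little lower on the training data but may generalize as well or better on the test data; in practice you would tune such hyperparameters with cross-validation rather than a single train/test split.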