---
output: html_notebook
---

## Some preliminaries

If you have not already done so, I recommend reviewing the following posts, which will help you reproduce the results shown below.

 - Installing Anaconda (https://predictivemodeler.com/2019/01/11/installing-anaconda/)
 - Installing RStudio (https://predictivemodeler.com/2018/12/03/installing-rstudio/)

This post was created in RStudio using the R Notebook format. For an overview of R Markdown basics, a good resource is: https://rmarkdown.rstudio.com/authoring_basics.html

## Background

In this example we develop a basic regression model to explain the variation in the compressive strength of concrete. Concrete is the literal building block that has shaped modern living. Figuring out what makes it stronger is a very important topic in construction. 

We hypothesize that the compressive strength of concrete is a function of the age and the mixture of ingredients. 

The data that we use in this example can be downloaded here: https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength

## Modeling

The process of linear regression modeling broadly involves 1) exploring the data, 2) selecting variables, 3) estimating the parameters of the model, and 4) evaluating the model. And not necessarily in that order! I illustrate a few of these steps below.

### Data Exploration

First, we read the data in. Notice the "fileEncoding" argument. I had to include it because we were otherwise reading in a pesky hidden character at the start of the file.

```{r}
dat = read.csv("D:/Data/PredictiveModeler/UCI_Data/ConcreteCompressive/Concrete_Data.csv", header = TRUE, fileEncoding="UTF-8-BOM")
```
We can use the head(dat) command to display and inspect the top rows of the data. You can click the button at the top right of the table to show more columns. The variable whose variation we wish to predict/estimate/explain is the Concrete Compressive Strength, or "CCS" (i.e. the last column).

```{r}
head(dat)
```

We check the number of rows and columns in the data, as well as how the values are distributed, using the commands below.

```{r}
nrow(dat) #number of rows
ncol(dat) #number of columns
summary(dat) #min / mean / median etc.
```


One of the first things to do is to see how our dependent variable (CCS) is distributed. We can create a quick histogram using the command: hist(dat$CCS). This isn't a test of anything, but just helps us understand the range of our dependent values, and how those values are distributed along that range.

```{r}
hist(dat$CCS)
```

Next, we wish to see how our *independent* variables are correlated with our *dependent* variable, CCS. Strong positive or negative correlations with the dependent variable suggest that the independent variable might be important in terms of explaining the dependent variable's variation. Strong correlations amongst the independent variables suggest that *multicollinearity* (MC) might be an issue in the modeling. Technically, multicollinearity means that an independent variable can be predicted linearly from one or more of the other independent variables to a high degree of accuracy. MC is an issue because it makes the coefficient estimates highly dependent on which variables are included in the model and makes them difficult to interpret properly (while not increasing the predictive power of the model in proportion to the disadvantages it causes!)*. 

We use a new library called "psych" in order to create these plots, as below. The code can be edited to see how the variables are distributed and how they are correlated with each other. If you do not have the library installed, you will need to do so by going to Tools > Install Packages... in your RStudio program.

*For more information: https://newonlinecourses.science.psu.edu/stat501/node/346/ 

```{r}
library(psych)
pairs.panels(dat[c(9,1,7:8)], 
             method = "pearson", # correlation method
             hist.col = "#00AFBB",
             density = TRUE,  # show density plots
             ellipses = TRUE # show correlation ellipses
             )
```
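The pairs plot flags pairwise correlations, but multicollinearity can also involve three or more variables at once. A common numeric check is the variance inflation factor (VIF): regress each predictor on the others and compute $1/(1-R^2)$. The sketch below computes it with base R only, on simulated data (the function name vif_by_hand and the 4/10 cutoffs are illustrative conventions, not part of this post's pipeline):

```{r}
# VIF by hand: regress each predictor on the others and compute
# 1 / (1 - R^2). Values above ~4 (some say 10) suggest the predictor
# is close to a linear combination of the rest.
vif_by_hand = function(df) {
  sapply(names(df), function(v) {
    f  = reformulate(setdiff(names(df), v), response = v)
    r2 = summary(lm(f, data = df))$r.squared
    1 / (1 - r2)
  })
}

# Tiny simulated example: x3 is built from x1 and x2, so its VIF is large
set.seed(1)
x1 = rnorm(100)
x2 = rnorm(100)
x3 = x1 + 0.5 * x2 + rnorm(100, sd = 0.1)
vif_by_hand(data.frame(x1, x2, x3))
```

For our data set, you could run the same function on dat[1:8] (the eight predictor columns) before leaning too heavily on any individual coefficient.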


### Key Assumptions: Linear Regression

The key assumptions of linear regression that we want to test include:

1. Linearity & Additivity: An assumption that the dependent variable is a linear and additive function of the independent variables.
2. Independence of errors (residuals): The errors are not correlated.
3. Constant variance of errors (also called by a needlessly complicated name: homoscedasticity): The variance of the residuals should be approximately equal across the range of the dependent variable (CCS in our case).
4. Errors are distributed normally: Generally less important than the assumptions above; we ideally want to see the residuals distributed normally.
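Alongside the diagnostic plots we inspect later, a couple of these assumptions have quick numeric checks in base R. A minimal sketch on simulated data (the lag-1 correlation is only a crude stand-in for a proper independence test such as Durbin-Watson):

```{r}
set.seed(11)
n = 100
x = rnorm(n)
y = 1 + 2 * x + rnorm(n)   # simulated data that satisfies the assumptions
m = lm(y ~ x)
res = residuals(m)

shapiro.test(res)          # assumption 4: normality of residuals
cor(res[-n], res[-1])      # crude look at assumption 2: lag-1 correlation
```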

### Functional Form

We are skipping this step here, as we are *assuming* a *linear* relationship between the independent variables and the dependent variable. I will make a post later on how to estimate a functional form given data.


### Parameter Estimation

Next, we fit a linear model using all of the *independent* variables that we have. We can do so using the following command:

```{r}
mfull=lm(CCS~.,data=dat)
```

We call our model "mfull" (for full model). We regress the *dependent* variable CCS on all other variables using the "~." notation. We can also select specific variables using "CCS~Age+C1.Cement", etc. 

Next, we display a summary of the fitted model.

```{r}
summary(mfull)
```

Here are a few key observations from the linear regression summary:

#### R-Square 

We have an R-square of 61.55% and an adjusted R-Square of about the same. This is good, because the adjusted R-Square *adjusts* the metric for the number of predictors in the model. The adjusted R-Square is never higher than the R-Square, and a big gap between the two hints that the model has more predictors than it needs, in which case the section below on variable selection becomes all the more important!

More information: 

* http://blog.minitab.com/blog/adventures-in-statistics-2/multiple-regession-analysis-use-adjusted-r-squared-and-predicted-r-squared-to-include-the-correct-number-of-variables
* http://thestatsgeek.com/2013/10/28/r-squared-and-adjusted-r-squared/
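As a quick sanity check, the adjusted R-Square can be reproduced from the plain R-Square as $1 - (1 - R^2)\frac{n-1}{n-k-1}$, where $n$ is the number of observations and $k$ the number of predictors. A small sketch on simulated data:

```{r}
set.seed(42)
n  = 200
x1 = rnorm(n)
x2 = rnorm(n)
y  = 1 + 2 * x1 - x2 + rnorm(n)
s  = summary(lm(y ~ x1 + x2))

k = 2  # number of predictors
adj_by_hand = 1 - (1 - s$r.squared) * (n - 1) / (n - k - 1)
all.equal(adj_by_hand, s$adj.r.squared)  # TRUE
```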


#### Overall F-statistic / p-value

The F-test in a multivariable linear regression helps us determine whether *any* of the independent variables is significant. Note that it does not tell us that *all* or *most* of the variables are significant; for that, we have to inspect other diagnostics (see below).

The overall F-stat produces a very tiny p-value, so we can reject the *null hypothesis* that all of the coefficients are 0 (i.e. that no variable is significant). Generally speaking, we want this p-value to be below 0.05.

More information: 

* http://facweb.cs.depaul.edu/sjost/csc423/documents/f-test-reg.htm
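The three numbers behind that p-value live in the summary object, and the p-value itself can be recomputed with pf(). A sketch on simulated data (for this post's model you would use summary(mfull)$fstatistic instead):

```{r}
set.seed(7)
n  = 150
x1 = rnorm(n)
x2 = rnorm(n)
y  = 0.5 * x1 + rnorm(n)

fstat = summary(lm(y ~ x1 + x2))$fstatistic  # named: value, numdf, dendf
pf(fstat["value"], fstat["numdf"], fstat["dendf"], lower.tail = FALSE)
```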

#### Residual Standard Error

The model has a Residual Standard Error (RSE) of 10.4. This, along with the R-square, is a measure of the *goodness of fit* of the model. The RSE tells us the typical size of the error between the data points and the fitted line. It is calculated as:

$$
RSE = \hat{\sigma} = \sqrt{\frac{\Sigma_{i=1}^{n}{(Y_i -\hat{Y_i})^2}} {n-k-1} }
$$
Where $k$ is the number of predictors in the model and $n$ is the number of observations. In the case of our model, an RSE of 10.4 tells us that the typical difference between the observed values and the fitted line is about 10.4 (in the units of CCS). If the residuals are distributed normally, then about two-thirds of the errors fall in the $\pm10.4$ range and about 95% of the errors fall in the $\pm20.8$ range.
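The formula can be checked directly against what R reports via sigma(). A minimal sketch on simulated data, including the rough two-thirds rule mentioned above:

```{r}
set.seed(99)
n = 120
x = rnorm(n)
y = 3 + 2 * x + rnorm(n, sd = 5)
m = lm(y ~ x)

k = 1  # one predictor
rse_by_hand = sqrt(sum(residuals(m)^2) / (n - k - 1))
all.equal(rse_by_hand, sigma(m))     # TRUE: matches lm's reported RSE
mean(abs(residuals(m)) <= sigma(m))  # roughly 2/3 for normal residuals
```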

#### Variable significance tests

In the summary output above, we have for each variable: 1) the coefficient estimate, 2) the standard error, 3) the t-value, and 4) the p-value. If the p-value is small enough (e.g. less than 0.05), then we reject the *null* hypothesis that the coefficient is 0 (thereby accepting that it is something other than 0). This doesn't tell us that the numerical value of the coefficient we have is **the correct** value (hint: nothing tells us that!), but it at least tells us that the variable is significant enough to warrant inclusion in our linear model.

In our model, the p-values for six of our eight predictors are comfortably small. The p-value for the intercept term is dubious, but it is a constant; we keep it in. The p-values for the C6.Coarse.Aggregate and C7.Fine.Aggregate variables are a bit dodgy too (just above 0.05)...we stick with these for now.

#### Adding fitted values and prediction intervals

We can add the fitted values to our data set and also include the prediction interval. The prediction interval tells us that there is a 95% chance (according to our model, which could be wrong!) that the next data point corresponding with the given independent variable values will lie in the given range.

We use the following command to create a new data frame that includes the prediction and intervals.

```{r}
dat_pred=cbind(dat, predict(mfull,dat,interval="prediction"))
```
We can then explore the new data frame using head(dat_pred). Click the arrow at the top right of the table below to navigate to the newly added columns: "fit", which is the prediction, and the lower/upper prediction interval bounds "lwr"/"upr".

```{r}
head(dat_pred)

```

We can explore the model fit visually by inspecting a series of plots that are available after the linear regression model is fit. For example, plot(mfull,1) produces a graph of the residuals vs. the fitted values. Ideally, we are looking for a flat horizontal line around 0. This plot tests for non-linearity in the data and whether the variance of the residuals is constant (i.e. homoscedasticity). For the former, we want to see a straight line; for the latter, we want the residuals randomly scattered above and below the line (and not fanning in or out as the fitted values change).

We don't quite see a straight line (which tells us there may be non-linearity in how CCS varies with the independent variables), and we do see some change in the variance of the residuals (more tightly spread at lower and higher fitted values). The problems are not severe enough to discard our linear modeling, but they tell us that we may want to explore a non-linear model at some point if higher accuracy is required.

```{r}
plot(mfull,1)
```

The normal Q-Q plot shows whether the error (i.e. the *residuals*) are normally distributed. We want the plots to lie roughly on the line (which they do in this case).

```{r}
plot(mfull,2)
```

A scale-location plot shows whether the residuals are evenly spread across the range of predicted values.

```{r}
plot(mfull,3)
```

Cook's distance helps to spot outliers. A data point with a large Cook's distance has a strong influence on the fitted model. In the plot below, data points 57, 225, and 611 have large values. 

```{r}
plot(mfull,4)
```

In order to see those rows, we can use the command dat_pred[57,] (for the 57th row): 

```{r}
dat_pred[57,]
```
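Rather than reading the indices off the plot, we can pull the distances directly with cooks.distance(). A sketch on simulated data with one planted outlier (the 4/n cutoff is a common rule of thumb, not a hard rule):

```{r}
set.seed(3)
n = 50
x = rnorm(n)
y = 2 * x + rnorm(n)
y[10] = y[10] + 15    # plant one influential point
m  = lm(y ~ x)
cd = cooks.distance(m)
which(cd > 4 / n)     # indices of high-influence rows
```

For the concrete model, which(cooks.distance(mfull) > 4 / nrow(dat)) gives the analogous list of rows to inspect via dat_pred.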


The residuals vs. leverage plot helps us spot outliers, as well.

```{r}
plot(mfull,5)
```

There are important future topics to consider, such as variable selection & transformations. It is very important with a method like linear regression to keep the model as simple as possible. If a variable is not improving the performance of the model, leaving it in can cause problems such as overfitting, or make the model more difficult to interpret. 

### Variable Selection

Our goal here is to try and fit a linear model given a set of variables. However, how do we determine a) whether to use all or only a subset of the variables, and b) whether to use the variables as-is or to transform them before use? We are not going to tackle transformations in this post; however, we will discuss step-wise regression as one method to create simpler models and to decide whether a more complex model is worth it.

The following command fits models of increasing complexity (i.e. up to the full eight variables) and measures the error of each by running 10-fold cross-validation (very useful, as it keeps overfitting from contaminating our results!). 

This example code is from the following, more detailed post: http://www.sthda.com/english/articles/37-model-selection-essentials-in-r/154-stepwise-regression-essentials-in-r/

```{r}
library(caret)
# Set seed for reproducibility
set.seed(123)
# Set up repeated k-fold cross-validation
train.control = trainControl(method = "cv", number = 10)
# Train the model
step.model = train(CCS ~., data = dat,
                    method = "leapBackward", 
                    tuneGrid = data.frame(nvmax = 1:8),
                    trControl = train.control
                    )
step.model$results
```

I can see from the table above that there isn't a whole lot of difference in performance between the best 5-variable model and the best 8-variable model. Occam's Razor* is a wonderful heuristic, and I'd take a simpler model over a more complicated one any day when it comes to a method like linear regression. 

*More on Occam's Razor: https://en.wikipedia.org/wiki/Occam%27s_razor 
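One way to make "simpler wins" operational is the one-standard-error rule: take the smallest model whose cross-validated error is within one standard error of the best. A sketch on a made-up table shaped like step.model$results (the RMSE numbers below are illustrative, not this data set's actual results):

```{r}
# Hypothetical CV results, one row per model size (nvmax), as caret returns
results = data.frame(
  nvmax  = 1:8,
  RMSE   = c(14.5, 12.9, 11.8, 11.0, 10.5, 10.45, 10.42, 10.40),
  RMSESD = rep(0.6, 8)
)

best   = which.min(results$RMSE)
cutoff = results$RMSE[best] + results$RMSESD[best]
min(results$nvmax[results$RMSE <= cutoff])  # smallest size within one SE: 4
```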

We can see which variables produce the best 1-variable model, 5-variable model, and so on, using:

```{r}
summary(step.model$finalModel)
```

In this case, the best 5-variable model includes the following independent variables: C1, C2, C3, C4, and Age. We drop C5, C6, and C7 (C6 and C7, if you recall, also had the dodgy p-values!).

OK - that's enough linear regression for me for this weekend! Till next time!