We've looked at a range of topics involved in fitting a model to data. This began with the simplest regression cases and the search for criteria defining an optimal model, which led us to mean squared error. From there, we examined overfitting and underfitting, which motivated the train-test split and, later, the bias-variance tradeoff. Here, we synthesize many of these ideas into a new sampling and optimization meta-routine known as cross-validation.
A common form of cross-validation is known as K-fold. In this process, the dataset is partitioned into K roughly equally sized groups. Each group then takes a turn as the hold-out test set while the remaining K-1 groups are used as the training set. This produces K different models, one for each hold-out test set. These models can then be averaged (perhaps with a weighted average based on test-set performance) to produce a final model.
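As a quick illustration of the partitioning (this uses scikit-learn's `KFold` for convenience; the lab below has you implement the split yourself), ten rows with K=5 yield five hold-out folds of two rows each:

```python
from sklearn.model_selection import KFold
import numpy as np

data = np.arange(10)    # a toy "dataset" of 10 rows
kf = KFold(n_splits=5)  # K=5: five folds of 2 rows each
for train_idx, test_idx in kf.split(data):
    # each fold takes one turn as the hold-out test set
    print("test:", test_idx, "train:", train_idx)
```

Every row lands in exactly one test fold, and in the training set for the other four folds.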
This is also a very useful method for estimating how well our models generalize, that is, the anticipated difference between train and test errors for the model.
Write a function `kfolds` that splits a dataset into k evenly sized pieces. If the full dataset is not divisible by k, make the first few folds one row larger than the later ones.
def kfolds(data, k):
    import pandas as pd  # imported here since the import cell comes later
    data = pd.DataFrame(data)  # force data into a DataFrame (optional but helpful)
    base, rem = divmod(len(data), k)  # the first `rem` folds get one extra row
    folds, start = [], 0
    for size in [base + 1] * rem + [base] * (k - rem):
        folds.append(data.iloc[start:start + size]); start += size
    return folds  # folds is a list of subsets of data
- Split your dataset into 10 groups using your kfolds function above.
- Perform a linear regression on each fold's training set and calculate both the training and test error.
- Create a simple bar chart to display the train and test errors for each of the 10 folds.
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
df = pd.read_excel('movie_data_detailed_with_ols.xlsx')
X_feats = ['budget', 'imdbRating',
'Metascore', 'imdbVotes']
y_feat = 'domgross'
df.head()
| | budget | domgross | title | Response_Json | Year | imdbRating | Metascore | imdbVotes | Model |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 13000000 | 25682380 | 21 & Over | 0 | 2008 | 6.8 | 48 | 206513 | 4.912759e+07 |
| 1 | 45658735 | 13414714 | Dredd 3D | 0 | 2012 | 0.0 | 0 | 0 | 2.267265e+05 |
| 2 | 20000000 | 53107035 | 12 Years a Slave | 0 | 2013 | 8.1 | 96 | 537525 | 1.626624e+08 |
| 3 | 61000000 | 75612460 | 2 Guns | 0 | 2013 | 6.7 | 55 | 173726 | 7.723381e+07 |
| 4 | 40000000 | 95020213 | 42 | 0 | 2013 | 7.5 | 62 | 74170 | 4.151958e+07 |
folds = kfolds(df, k=10)
# folds[0]
# folds[1]
# folds[8]
# folds[9]
def mse(residual_col):
    # Mean squared error: the average of the squared residuals
    residual_col = pd.Series(residual_col)
    return (residual_col ** 2).mean()
test_errs = []
train_errs = []
k = 10
for n in range(k):
    # Split into the train and test sets for this fold
    train = pd.concat([fold for i, fold in enumerate(folds) if i != n])
    test = folds[n]
    # Fit Linear Regression Model
    linreg = LinearRegression()
    linreg.fit(train[X_feats], train[y_feat])
    # Evaluate Train and Test Errors
    train_errs.append(mse(train[y_feat] - linreg.predict(train[X_feats])))
    test_errs.append(mse(test[y_feat] - linreg.predict(test[X_feats])))
# Plot Train Versus Test Errors for each of the 10 folds
plt.bar(np.arange(k) - 0.2, train_errs, width=0.4, label='Train MSE')
plt.bar(np.arange(k) + 0.2, test_errs, width=0.4, label='Test MSE')
plt.xlabel('Fold')
plt.legend()
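As an aside, once you have built this loop by hand, scikit-learn's `cross_val_score` automates the same procedure. Here is a minimal sketch on synthetic data (the features and coefficients below are made up purely for illustration):

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))  # 100 synthetic rows, 4 features
y = X @ np.array([1.0, 2.0, 0.5, -1.0]) + rng.normal(scale=0.1, size=100)
# cv=10 runs 10-fold cross-validation; sklearn reports *negative* MSE
scores = cross_val_score(LinearRegression(), X, y, cv=10,
                         scoring='neg_mean_squared_error')
print("average test MSE:", -scores.mean())
```

This returns one score per fold, so averaging them gives an overall estimate of test error.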
What do you notice about the train and test errors?
#Your answer here
Write a function to randomly shuffle the rows of your dataset prior to cross-validation.
Why might you want to do this?
#Your function here
#Your answer here
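One possible sketch of such a shuffler (the name `shuffle_data` and the `seed` parameter are choices for this example, not part of the lab): pandas' `sample(frac=1)` draws every row without replacement, which amounts to a random reordering.

```python
import pandas as pd

def shuffle_data(data, seed=None):
    # sample all rows without replacement, i.e. a random reordering
    return data.sample(frac=1, random_state=seed).reset_index(drop=True)
```

Shuffling matters when the rows arrive in some order (by year, by budget, and so on); without it, each fold would inherit that ordering and the fold-to-fold errors would be skewed.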