A sensitivity study of bias and variance of k-fold cross-validation in prediction error estimation
Date
2009
Author
Rodríguez Fernández, Juan Diego
Pérez Martínez, Aritz
Lozano Alonso, José Antonio
Abstract
In the machine learning field, the performance of a classifier is usually measured in terms of prediction error. In most real-world problems the error cannot be calculated exactly and must be estimated, so it is important to choose an appropriate estimator of the error.
This paper analyzes the statistical properties (bias and variance) of the k-fold cross-validation classification error estimator (k-cv). Our main contribution is a novel theoretical decomposition of the variance of the k-cv into its sources of variance: sensitivity to changes in the training set and sensitivity to changes in the folds. The paper also compares the bias and variance of the estimator for different values of k. The empirical study is performed in artificial domains because they allow the exact computation of the implied quantities and a rigorous specification of the experimental conditions. It covers two classifiers (naïve Bayes and nearest neighbor), different numbers of folds (2, 5, 10, n), several sample sizes, and training sets drawn from assorted probability distributions.
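The k-cv estimator described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it partitions a sample into k folds, trains a 1-nearest-neighbor classifier (one of the two classifiers studied) on k-1 folds, tests on the held-out fold, and averages the fold error rates. The artificial two-Gaussian domain and all function names here are illustrative assumptions.

```python
import numpy as np

def nn_predict(X_train, y_train, X_test):
    # 1-nearest-neighbor: assign the label of the closest training point
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[np.argmin(dists, axis=1)]

def kfold_cv_error(X, y, k, rng):
    # k-cv estimator: average held-out error over the k folds of a
    # random partition of the sample
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    fold_errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        preds = nn_predict(X[train], y[train], X[test])
        fold_errors.append(np.mean(preds != y[test]))
    return float(np.mean(fold_errors))

# Illustrative artificial domain: two 2-D Gaussian classes
rng = np.random.default_rng(0)
n = 100
X = np.concatenate([rng.normal(0.0, 1.0, (n // 2, 2)),
                    rng.normal(2.0, 1.0, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

for k in (2, 5, 10, n):  # k = n is leave-one-out
    print(f"k={k}: estimated error = {kfold_cv_error(X, y, k, rng):.3f}")
```

Repeating the estimate over fresh samples (sensitivity to the training set) and over fresh fold partitions of a fixed sample (sensitivity to the folds) separates the two variance sources the paper decomposes.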