The tension between minimizing two sources of error: 'Bias' (error from oversimplifying the model) and 'Variance' (error from excessive sensitivity to small fluctuations in the training set). A good model must find the sweet spot between underfitting (high bias) and overfitting (high variance).
A core concept in statistical learning theory, formalized by the bias-variance decomposition of expected prediction error.
Central to understanding model performance; it explains why more complex models aren't always better.
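The tradeoff can be illustrated with a small sketch: below, polynomials of increasing degree are fit to noisy samples of an assumed ground-truth function (a sine curve). The low-degree model underfits (high bias), the high-degree model overfits (high variance), and an intermediate degree sits near the sweet spot. The data-generating function, sample sizes, and degrees are illustrative choices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Noisy samples of an assumed ground-truth function: sin(2*pi*x)
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = make_data(30)
x_test, y_test = make_data(200)

def train_test_mse(deg):
    # Fit a polynomial of the given degree to the training set,
    # then measure mean squared error on train and held-out test data.
    coeffs = np.polyfit(x_train, y_train, deg)
    err = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return err(x_train, y_train), err(x_test, y_test)

# degree 1: underfits (high bias); degree 15: overfits (high variance)
results = {deg: train_test_mse(deg) for deg in (1, 3, 15)}
for deg, (tr, te) in results.items():
    print(f"degree {deg:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

The telltale pattern: the degree-1 model has high error on both splits, while the degree-15 model drives training error near zero yet does worse than the moderate model on unseen data.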