Bagging (bootstrap aggregating) is an ensemble meta-algorithm that improves stability and accuracy. It works by training multiple versions of the same model on random samples of the training data drawn with replacement (bootstrapping) and then averaging their predictions (aggregating) to reduce variance.
Introduced by Leo Breiman in 1996, it laid the groundwork for the Random Forest algorithm.
It is ideally suited for high-variance models such as decision trees, preventing them from overfitting to noise in the training set.
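The bootstrap-then-average procedure above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the base learner here is a hypothetical depth-1 "stump" regressor chosen only because it is simple and high-variance, and the helper names (`bootstrap_sample`, `fit_stump`, `bagging_fit`) are invented for this example.

```python
import random
from statistics import mean

def bootstrap_sample(data, rng):
    # Draw len(data) points with replacement -- the "bootstrap" step.
    return [rng.choice(data) for _ in data]

def fit_stump(data):
    # Toy high-variance base learner: split x at the median and predict
    # the mean y on each side of the split.
    xs = sorted(x for x, _ in data)
    split = xs[len(xs) // 2]
    left = [y for x, y in data if x <= split]
    right = [y for x, y in data if x > split]
    left_pred = mean(left) if left else 0.0
    right_pred = mean(right) if right else left_pred
    return lambda x: left_pred if x <= split else right_pred

def bagging_fit(data, n_models=25, seed=0):
    rng = random.Random(seed)
    # Train one model per bootstrap sample...
    models = [fit_stump(bootstrap_sample(data, rng)) for _ in range(n_models)]
    # ...and aggregate by averaging their predictions.
    return lambda x: mean(m(x) for m in models)

# Noisy step function: y is near 0 for x < 5 and near 1 for x >= 5.
data = [(x, (1.0 if x >= 5 else 0.0) + 0.01 * ((x * 7) % 3 - 1))
        for x in range(10)]
model = bagging_fit(data)
```

Averaging over many bootstrap replicates smooths out each stump's sensitivity to which noisy points happened to be sampled, which is exactly the variance reduction the text describes.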