A classification metric measuring the proportion of correct predictions (both True Positives and True Negatives) out of the total number of predictions: (TP + TN) / (TP + TN + FP + FN). While intuitive, Accuracy can be misleading on imbalanced datasets (e.g., in fraud detection where 99.9% of transactions are legitimate, a model that predicts 'legitimate' for everything achieves 99.9% accuracy but is useless). In such cases, metrics like Precision, Recall, and F1-Score are more appropriate.
A fundamental statistical measure derived from the Confusion Matrix.
The most common 'first-glance' metric for model performance, but often replaced by more robust metrics in professional applications.
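The fraud-detection pitfall described above can be demonstrated with a short sketch. The helper functions and the 999-to-1 class split below are illustrative assumptions, not drawn from any particular library or dataset:

```python
# Illustrative sketch: why accuracy misleads on imbalanced data.
# Hypothetical fraud-detection labels: 1 = fraud, 0 = legitimate.

def accuracy(y_true, y_pred):
    """Proportion of correct predictions: (TP + TN) / total."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall(y_true, y_pred, positive=1):
    """TP / (TP + FN): the fraction of actual positives the model catches."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

# 999 legitimate transactions and 1 fraudulent one.
y_true = [0] * 999 + [1]
# A useless model that predicts 'legitimate' for everything.
y_pred = [0] * 1000

print(accuracy(y_true, y_pred))  # 0.999 -- looks excellent
print(recall(y_true, y_pred))    # 0.0   -- catches no fraud at all
```

The same comparison is available in scikit-learn via `accuracy_score` and `recall_score`; the point is that a metric sensitive to the minority class exposes the failure that accuracy hides.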