Transfer Learning is a technique where a model developed for a task is reused as the starting point for a model on a second task. For example, a model trained to recognize objects in millions of generic images (like ImageNet) can be fine-tuned to recognize rare diseases in medical X-rays with much less data.
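The core mechanic — freeze a pretrained feature extractor and train only a small new head on the target task — can be sketched in plain NumPy. This is a minimal illustration, not a real pipeline: the "pretrained" backbone here is a stand-in random projection, whereas in practice its weights would come from training on a large source dataset such as ImageNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor. In a real setting these
# weights would be learned on a large source task (e.g. ImageNet), not random.
W_pretrained = rng.normal(size=(64, 16))  # maps 64-dim inputs to 16-dim features

def extract_features(x):
    # Frozen backbone: W_pretrained is never updated during fine-tuning.
    return np.maximum(x @ W_pretrained, 0.0)  # ReLU features

# Small target-task dataset, mimicking the low-data regime (e.g. medical images).
X = rng.normal(size=(100, 64))
true_w = rng.normal(size=16)
y = (extract_features(X) @ true_w > 0).astype(float)  # synthetic binary labels

# New task-specific head, trained from scratch on the small dataset.
w_head = np.zeros(16)
lr = 0.1
for _ in range(500):
    feats = extract_features(X)                  # (100, 16)
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))  # sigmoid predictions
    grad = feats.T @ (p - y) / len(y)            # logistic-loss gradient
    w_head -= lr * grad                          # update ONLY the head

probs = 1.0 / (1.0 + np.exp(-(extract_features(X) @ w_head)))
acc = np.mean((probs > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the backbone stays fixed, only a 16-parameter head is trained, which is why far fewer labeled examples suffice on the target task. With a deep-learning framework the same pattern applies: load pretrained weights, set the backbone's parameters to non-trainable, and replace the final layer.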
The technique gained massive popularity with the success of ImageNet pre-training in computer vision (c. 2012) and later with ULMFiT and BERT in NLP.
It is now the dominant paradigm in deep learning, enabling 'Foundation Models' that can be adapted to countless downstream tasks.