Chain-of-Thought (CoT) is a prompting technique that encourages large language models to break complex problems into intermediate reasoning steps. By articulating its reasoning before committing to a final answer, the model performs significantly better on logic, math, and symbolic reasoning tasks. The technique is particularly effective on larger models (roughly 100B+ parameters); smaller models often see little benefit.
Popularized by Google Research (Wei et al., 2022) as a method to unlock reasoning capabilities in large models.
A standard best practice in prompt engineering, often elicited in zero-shot form simply by appending the phrase "Let's think step by step" (Kojima et al., 2022).
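The zero-shot trigger described above can be sketched as a small prompt-construction helper. This is a minimal illustration, not a specific library's API: the question text is a made-up example, and in practice the returned string would be sent to whatever LLM client you use.

```python
# Minimal sketch of zero-shot Chain-of-Thought prompting.
# The helper only builds the prompt text; sending it to a model
# (and parsing the reasoning it returns) is left to the caller.

def build_cot_prompt(question: str) -> str:
    """Append the zero-shot CoT trigger phrase to a question."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A farmer has 17 sheep. All but 9 run away. How many are left?"
)
print(prompt)
```

Few-shot CoT works the same way, except that instead of the trigger phrase, the prompt is prefixed with a handful of worked examples whose answers spell out their intermediate steps.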