Chain of Thought is a prompting technique that encourages the model to break down a complex problem into intermediate steps and explain its reasoning before giving the final answer. This often improves accuracy on logic and math tasks.
It was popularized by Google Research in 2022 (Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models") as a way to improve the reasoning performance of large language models.
It is now widely used in prompt engineering to tackle complex reasoning problems where zero-shot prompting falls short.
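The technique can be sketched as plain prompt construction, independent of any model API. The helper names, the worked example, and the templates below are illustrative assumptions, not part of any library; the few-shot exemplar follows the style of the examples in the original paper, and the zero-shot variant uses the well-known "Let's think step by step" instruction:

```python
# Sketch of chain-of-thought (CoT) prompt construction. No model call is
# made; these helpers only build prompt strings. All names and the example
# problems are hypothetical, chosen for illustration.

ZERO_SHOT_TEMPLATE = "Q: {question}\nA:"

# A single worked example whose answer spells out intermediate steps.
# The model is expected to imitate this step-by-step style.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11."
)


def build_zero_shot_prompt(question: str) -> str:
    """Plain prompt: ask for the answer directly."""
    return ZERO_SHOT_TEMPLATE.format(question=question)


def build_cot_prompt(question: str) -> str:
    """Few-shot CoT prompt: prepend a worked example with explicit
    reasoning steps, so the model reasons before answering."""
    return COT_EXEMPLAR + "\n\n" + ZERO_SHOT_TEMPLATE.format(question=question)


def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: no exemplar, just an instruction that nudges
    the model to produce intermediate reasoning."""
    return ZERO_SHOT_TEMPLATE.format(question=question) + " Let's think step by step."


if __name__ == "__main__":
    q = "A cafeteria had 23 apples. It used 20 and bought 6 more. How many are left?"
    print(build_cot_prompt(q))
```

The contrast between `build_zero_shot_prompt` and the two CoT variants is the whole technique: the only change is the prompt text, yet the added reasoning scaffold is what tends to improve accuracy on logic and math tasks.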