Plain meaning
A prompting technique or model-native capability in which the LLM produces intermediate reasoning steps before arriving at a final answer, improving accuracy on multi-step problems. Originally a prompting strategy ("think step by step"), CoT is now built directly into reasoning models such as o1 and DeepSeek-R1 as an internal process. When combining CoT with structured output, place reasoning fields before answer fields; if the answer field comes first, the model commits to an answer before generating any reasoning, which defeats the technique.
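The field-ordering point can be sketched with a response schema. This is a minimal, hypothetical JSON Schema (the field names `reasoning` and `answer` are illustrative, not from any specific API): because the model generates the structured output token by token in field order, listing `reasoning` first forces the reasoning to be produced before the answer.

```python
import json

# Hypothetical response schema for an LLM structured-output request.
# "reasoning" is declared before "answer" so the model must generate its
# step-by-step reasoning first, then the final answer.
schema = {
    "type": "object",
    "properties": {
        "reasoning": {
            "type": "string",
            "description": "Step-by-step reasoning toward the answer.",
        },
        "answer": {
            "type": "string",
            "description": "The final answer only.",
        },
    },
    "required": ["reasoning", "answer"],
}

# Python dicts preserve insertion order, so serializing the schema keeps
# "reasoning" ahead of "answer" in the payload sent to the model.
field_order = list(schema["properties"])
print(field_order)  # → ['reasoning', 'answer']
print(json.dumps(schema, indent=2))
```

Reversing the two properties would let the model emit `answer` first and then rationalize it after the fact, which is the failure mode the ordering advice guards against.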