cs.AI · 2022
Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
Least-to-most prompting decomposes a complex problem into a sequence of simpler subproblems that are solved in order, with each answer appended to the prompt for the next step; this enables LLMs to reach at least 99% accuracy on the SCAN compositional generalization benchmark using only 14 exemplars.
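A minimal sketch of the two-stage procedure the summary describes: first ask the model to decompose the problem, then solve the subproblems in order, feeding earlier answers forward. The function names (`decompose`, `solve_least_to_most`) and the scripted `toy_llm` stand-in are illustrative assumptions, not APIs from the paper.

```python
def decompose(question, llm):
    # Stage 1: ask the model to break the problem into simpler subproblems.
    prompt = (
        "Break the problem into simpler subproblems, one per line.\n"
        f"Q: {question}\nSubproblems:"
    )
    return [s.strip() for s in llm(prompt).splitlines() if s.strip()]

def solve_least_to_most(question, llm):
    # Stage 2: solve subproblems sequentially, appending each Q/A pair
    # to the context so later subproblems can build on earlier answers.
    subproblems = decompose(question, llm)
    context = ""
    answer = ""
    for sub in subproblems:
        prompt = f"{context}Q: {sub}\nA:"
        answer = llm(prompt).strip()
        context += f"Q: {sub}\nA: {answer}\n"
    return answer

def toy_llm(prompt):
    # Scripted stand-in for a real model, for demonstration only.
    if "Subproblems:" in prompt:
        return "What is 2 + 3?\nWhat is 5 + 4?"
    if prompt.endswith("Q: What is 2 + 3?\nA:"):
        return "5"
    return "9"

print(solve_least_to_most("What is 2 + 3 + 4?", toy_llm))  # → 9
```

In a real setting `llm` would wrap a model call, and both stages would carry few-shot exemplars (the 14 mentioned above for SCAN); the key mechanism is the same: later prompts include the answers already produced.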