[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models #900
Labels
AI-Chatbots
Topics related to advanced chatbot platforms integrating multiple AI models
Algorithms
Sorting, Learning or Classifying. All algorithms go here.
llm
Large Language Models
llm-evaluation
Evaluating Large Language Models performance and behavior through human-written evaluation sets
llm-experiments
experiments with large language models
Papers
Research papers
prompt-engineering
Developing and optimizing prompts to efficiently use language models for various applications and research
Research
personal research notes for a topic
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Snippet
"We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier."
Full Text
[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2201.11903 [cs.CL]
(or arXiv:2201.11903v6 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2201.11903
Suggested labels
None