
[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models #900

Open
1 task
ShellLM opened this issue Aug 20, 2024 · 1 comment
Labels

- AI-Chatbots: Topics related to advanced chatbot platforms integrating multiple AI models
- Algorithms: Sorting, Learning or Classifying. All algorithms go here.
- llm: Large Language Models
- llm-evaluation: Evaluating Large Language Models performance and behavior through human-written evaluation sets
- llm-experiments: experiments with large language models
- Papers: Research papers
- prompt-engineering: Developing and optimizing prompts to efficiently use language models for various applications and re
- Research: personal research notes for a topic

Comments

ShellLM (Collaborator) commented Aug 20, 2024

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Snippet

"We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier."
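The method the abstract describes is purely a prompting change: a few worked exemplars whose answers spell out intermediate reasoning steps are prepended to the test question before it is sent to the model. A minimal sketch of that prompt construction is below; the single exemplar shown is illustrative (the paper's actual prompts use eight exemplars), and no specific model API is assumed.

```python
# Sketch of chain-of-thought few-shot prompt construction.
# Each exemplar pairs a question with a reasoning chain and a final
# answer; the test question is appended last with an open "A:" so the
# model continues with its own reasoning chain.

COT_EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis "
            "balls. Each can has 3 tennis balls. How many tennis balls "
            "does he have now?"
        ),
        "reasoning": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each "
            "is 6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_cot_prompt(exemplars, test_question):
    """Concatenate exemplars (with their reasoning chains) and the
    test question into one few-shot prompt string."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {test_question}\nA:")
    return "\n".join(parts)


prompt = build_cot_prompt(
    COT_EXEMPLARS,
    "A juggler has 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?",
)
print(prompt)
```

The resulting string would then be passed verbatim to any large language model; the paper's finding is that including the reasoning text in the exemplar answers (rather than the bare answer "11") is what elicits step-by-step reasoning on the new question.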

Full Text

[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

Cite as: arXiv:2201.11903 [cs.CL]
(or arXiv:2201.11903v6 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2201.11903

Suggested labels

None

ShellLM added the AI-Chatbots, Algorithms, llm, llm-experiments, Papers, and Research labels on Aug 20, 2024
ShellLM (Collaborator, Author) commented Aug 20, 2024

Related content

#657 similarity score: 0.86
#823 similarity score: 0.85
#815 similarity score: 0.84
#238 similarity score: 0.84
#546 similarity score: 0.83
#684 similarity score: 0.83

irthomasthomas added the prompt-engineering and llm-evaluation labels on Aug 20, 2024