
About verification of the effectiveness of the method proposed in this paper #5

Open
qibao77 opened this issue Jan 3, 2025 · 2 comments

Comments


qibao77 commented Jan 3, 2025

Since these are verifiable problems with known answers, which is better: the complex chain-of-thought (CoT) production method proposed in this paper, or a complex CoT that the model completes from the directly provided answer?


jymChen commented Jan 4, 2025

Hi @qibao77,
Thank you for your attention!

For this question, I would recommend the production method proposed in the paper, where the model independently attempts the problem multiple times until it finds the correct solution. While effective, this approach can consume substantial computational resources or API quota.
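
As a rough sketch of that search loop (not the repository's actual code; generate_cot and extract_answer are hypothetical stand-ins for the model call and answer parsing):

```python
# Minimal sketch of the independent-attempt search, assuming hypothetical
# helpers: sample reasoning paths one at a time and keep the first whose
# final answer matches the known ground truth.

def search_reasoning_path(question, ground_truth,
                          generate_cot, extract_answer, max_attempts=8):
    for _ in range(max_attempts):
        cot = generate_cot(question)              # unaided attempt by the model
        if extract_answer(cot) == ground_truth:   # verify against known answer
            return cot                            # verified reasoning path
    return None                                   # search budget exhausted
```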

To address scenarios requiring extensive searches, our code additionally provides the --efficient_search option in search_for_complex_reasoning_path.py. If the model reaches the maximum number of searches without success, this option allows it to directly refine the reasoning path based on the provided answer. However, constructing a reasoning path from the given answer may introduce biases, as the intermediate reasoning steps filled in by the model may potentially contain errors.
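
For illustration (only the flag name and script name come from our code; the helpers, including refine_with_answer, are hypothetical), the fallback could look like this:

```python
# Sketch of the --efficient_search fallback, assuming the hypothetical
# helpers above plus refine_with_answer(), which prompts the model to
# construct a reasoning path conditioned on the known answer.

def search_with_fallback(question, ground_truth, generate_cot, extract_answer,
                         refine_with_answer, max_attempts=8,
                         efficient_search=True):
    cot = search_reasoning_path(question, ground_truth,
                                generate_cot, extract_answer, max_attempts)
    if cot is None and efficient_search:
        # Cheaper than more sampling, but the intermediate steps are not
        # independently verified and may contain errors.
        cot = refine_with_answer(question, ground_truth)
    return cot
```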


qibao77 commented Jan 6, 2025


Thank you for your reply! You suppose that the "intermediate reasoning steps filled in by the model may potentially contain errors". Have you conducted any experiments to support this?
