Parallelization optimizations #105
Conversation
Are all these under the parallel feature flag?
Not yet. We can add the `parallel` feature flag everywhere; the only drawback is that the code becomes a little redundant. I can add a commit if we decide to put them under `parallel` flags. Btw, currently we achieve 1-thread performance by the command
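One common way to limit the duplication mentioned above is to gate only the iteration strategy behind the feature flag while sharing the loop body. A minimal sketch under assumed names (`halve` and the closure structure are illustrative, not the crate's actual code):

```rust
#[cfg(feature = "parallel")]
use rayon::prelude::*;

// Hypothetical sketch: only the iteration strategy is duplicated under the
// `parallel` feature flag; the arithmetic lives in one shared closure.
fn halve(data: &[i64], point: i64) -> Vec<i64> {
    let mut res = vec![0i64; data.len() / 2];
    // Shared loop body: linear interpolation between adjacent table entries.
    let body = |(i, x): (usize, &mut i64)| {
        *x = data[i << 1] + (data[(i << 1) + 1] - data[i << 1]) * point;
    };
    #[cfg(feature = "parallel")]
    res.par_iter_mut().enumerate().for_each(body);
    #[cfg(not(feature = "parallel"))]
    res.iter_mut().enumerate().for_each(body);
    res
}
```

With the feature off, the serial path compiles and the rayon import is dropped, so single-threaded builds pay no dependency or dispatch cost.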
res[i] = data[i << 1] + (data[(i << 1) + 1] - data[i << 1]) * point;
    }
}
res.par_iter_mut().enumerate().for_each(|(i, x)| {
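For readers unfamiliar with rayon, `par_iter_mut` effectively splits the output buffer into disjoint chunks and hands each chunk to a worker thread. The same computation can be sketched with only the standard library (function name and chunking scheme are illustrative, not the PR's code):

```rust
use std::thread;

// Hypothetical std::thread stand-in for the rayon loop above: split `res`
// into disjoint mutable chunks and evaluate each chunk on its own thread.
fn fix_variable_threaded(data: &[i64], point: i64, n_threads: usize) -> Vec<i64> {
    let half = data.len() / 2;
    let mut res = vec![0i64; half];
    let chunk = (half + n_threads - 1) / n_threads; // ceiling division
    thread::scope(|s| {
        for (t, out) in res.chunks_mut(chunk).enumerate() {
            let base = t * chunk; // global index of this chunk's first entry
            s.spawn(move || {
                for (j, x) in out.iter_mut().enumerate() {
                    let i = base + j;
                    // same interpolation as the serial loop
                    *x = data[i << 1] + (data[(i << 1) + 1] - data[i << 1]) * point;
                }
            });
        }
    });
    res
}
```

Because the chunks are disjoint, no locking is needed; rayon additionally work-steals between threads, which this fixed-chunk sketch does not.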
Should we continue to make the parallel path dependent on `nv`?
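The rationale for an `nv`-dependent switch: with `2^nv` table entries, thread spawn/join and scheduling overhead dominates for small `nv`, so a cutoff keeps small instances serial. A hedged sketch (the threshold value and names are illustrative, not taken from the PR):

```rust
// Hypothetical heuristic: only take the parallel path once the evaluation
// table (2^nv entries) is large enough to amortize threading overhead.
const MIN_PARALLEL_NV: usize = 10; // illustrative cutoff: >= 1024 entries

fn use_parallel_path(nv: usize) -> bool {
    nv >= MIN_PARALLEL_NV
}
```

In practice such a constant is tuned by benchmarking the crossover point on the target hardware rather than chosen a priori.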
List of optimizations on the prover:
- Replace `Rc` with `Arc` (cherry-pick from @bbuenz's branch main...arcpariter)
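The `Rc` to `Arc` swap is required for parallelization because `Rc<T>` is not `Send`, so it cannot be moved into spawned threads; `Arc<T>` uses atomic reference counts and can. A minimal sketch with hypothetical data (not the prover's actual tables):

```rust
use std::sync::Arc;
use std::thread;

// Sketch: sharing one immutable vector across worker threads via Arc.
// An Rc in the same position would fail to compile (Rc<T> is !Send).
fn sum_in_threads(evals: Arc<Vec<u64>>, n_threads: usize) -> u64 {
    let handles: Vec<_> = (0..n_threads)
        .map(|t| {
            let evals = Arc::clone(&evals); // cheap atomic refcount bump
            thread::spawn(move || {
                // each worker sums a strided subsequence of the shared data
                evals.iter().skip(t).step_by(n_threads).sum::<u64>()
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```

The cost of the swap is an atomic increment/decrement per clone/drop, which is typically negligible next to the prover's arithmetic.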