Initial Shplonk prover/verifier implementation #301
Conversation
Thanks a lot. I think we can have an initial side-by-side implementation in this PR for review, and once we're happy with the status of the PR, we can add a last commit that moves the changes from shplonk to hyperkzg.
As long as we're taking this approach, would you mind adding shplonk to the PCS benchmarks?
src/provider/hyperkzg.rs
Outdated
@@ -61,18 +61,20 @@ where
   E::Fr: TranscriptReprTrait<E::G1>,
   E::G1Affine: TranscriptReprTrait<E::G1>, // TODO: this bound on DlogGroup is really unusable!
 {
-  fn compute_challenge(
+  /// TODO: write doc
+  pub fn compute_challenge(
     com: &[E::G1Affine],
     transcript: &mut impl TranscriptEngineTrait<NE>,
   ) -> E::Fr {
     transcript.absorb(b"c", &com.to_vec().as_slice());
You can remove to_vec().as_slice(); the transcript already takes slices.
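For concreteness, the simplified call would presumably just be the following (a sketch; it relies on the transcript's absorb accepting the slice reference directly, as noted above):

// Sketch: pass the slice itself instead of cloning it into a Vec first.
transcript.absorb(b"c", &com);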
    com: &[E::G1Affine],
    transcript: &mut impl TranscriptEngineTrait<NE>,
  ) -> E::Fr {
    transcript.absorb(b"c", &com.to_vec().as_slice());
    transcript.squeeze(b"c").unwrap()
  }

  /// TODO: write doc
I like the existing comment already :)
Yes. I'm not focusing on this only because we are going to merge Shplonk with HyperKZG eventually. The follow-up PR or "final" commit that you mentioned will certainly include resolving these TODOs.
src/provider/hyperkzg.rs
Outdated
@@ -88,14 +90,16 @@ where
     transcript.squeeze(b"r").unwrap()
   }

-  fn batch_challenge_powers(q: E::Fr, k: usize) -> Vec<E::Fr> {
+  /// TODO: write doc
The inner comment should perhaps just be moved as a doc comment of this function. See also crate::spartan::powers, which could easily replace this function and its uses. We just need to move that function somewhere more accessible (the math module comes to mind). A sketch of such a helper follows.
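For reference, such a powers helper is small. This is a minimal sketch of what it typically looks like, not necessarily the exact signature of crate::spartan::powers:

use ff::Field;

/// Sketch: successive powers of `s`, i.e. [1, s, s^2, ..., s^(n-1)].
fn powers<F: Field>(s: &F, n: usize) -> Vec<F> {
  std::iter::successors(Some(F::ONE), |p| Some(*p * s))
    .take(n)
    .collect()
}

Moving something along these lines into the math module would let both hyperkzg and shplonk share it.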
src/provider/shplonk.rs
Outdated
<NE::CE as CommitmentEngineTrait<NE>>::commit(ck, &polys[i])
  .comm
In general, you don't want to use to_affine() in a loop. Consider using batch_normalize to post-process the projective points; see #290 for examples.
Happy to do that, but can you please explain the reason? batch_normalize seems to iterate over a vector of G1 elements and invoke to_affine for each, in a similar way:
fn batch_normalize(p: &[Self], q: &mut [Self::AffineRepr]) {
  assert_eq!(p.len(), q.len());
  for (p, q) in p.iter().zip(q.iter_mut()) {
    *q = p.to_affine();
  }
}
batch_normalize is part of the Curve trait, and what you're showing is the default implementation that kicks in if the implementer chooses not to optimize it. In most cases the implementer does provide something better, so it does not work that way for the curves that interest us. For instance, it's optimized in pasta:
https://github.com/zcash/pasta_curves/blob/df67e299e6644c035a213dbad369be279276919c/src/curves.rs#L173-L207
and for halo2curves:
https://github.com/privacy-scaling-explorations/halo2curves/blob/9fff22c5f72cc54fac1ef3a844e1072b08cfecdf/src/derive/curve.rs#L583-L614
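So the loop-free shape is roughly the following. This is a sketch against the group crate's Curve trait, not the PR's exact code; the helper name and bounds are illustrative:

use group::Curve;

// Sketch: keep the commitments projective, then convert the whole batch at once,
// so an optimized batch_normalize can amortize work (e.g. batch the field inversions).
fn to_affine_batch<C: Curve>(points: &[C]) -> Vec<C::AffineRepr>
where
  C::AffineRepr: Copy + Default,
{
  let mut affine = vec![C::AffineRepr::default(); points.len()];
  C::batch_normalize(points, &mut affine);
  affine
}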
E::Fr: Serialize + DeserializeOwned,
E::G1Affine: Serialize + DeserializeOwned,
E::G2Affine: Serialize + DeserializeOwned,
E::G1: DlogGroup<ScalarExt = E::Fr, AffineExt = E::G1Affine>,
<E::G1 as Group>::Base: TranscriptReprTrait<E::G1>, // Note: due to the move of the bound TranscriptReprTrait<G> on G::Base from Group to Engine
E::Fr: PrimeFieldBits, // TODO due to use of gen_srs_for_testing, make optional
E::Fr: TranscriptReprTrait<E::G1>,
E::G1Affine: TranscriptReprTrait<E::G1>,
It's really a pity we have to go through this declaration slog 😦 The good news is that those bounds are auto-satisfied (e.g. E::Fr: Serialize comes through NE: NovaEngine<Scalar = E::Fr>), which makes the type rather usable.
Yes, our trait bounds are indeed quite complicated, so I just copy-pasted them from hyperkzg with minimal adaptations to satisfy the compiler, I think.
src/provider/shplonk.rs
Outdated
#[allow(clippy::needless_range_loop)]
Pi.par_iter_mut().enumerate().for_each(|(j, Pi_j)| {
  *Pi_j =
    point[point.len() - i - 1] * (polys[i][2 * j + 1] - polys[i][2 * j]) + polys[i][2 * j];
});
aka:
(0..Pi_len).into_par_iter().map(|j| {
point[ell - i - 1] * (polys[i][2 * j + 1] - polys[i][2 * j]) + polys[i][2 * j]
}).collect_into_vec(&mut Pi);
Thanks. Indeed cleaner, and it doesn't require adding a clippy exception.
// K(x) = P(x) - Q(x) * D(a) - R(a), note that R(a) should be subtracted from a free term of polynomial
let K_x = Self::compute_k_polynomial(&batched_Pi, &Q_x, &D, &R_x, a);

// TODO: since this is a usual KZG10 we should use it as utility instead
Note the very important definition:
https://github.com/lurk-lab/arecibo/blob/7a5d7bf0e9be80b8b5c4d47bb99c67c2b058b35e/src/provider/non_hiding_kzg.rs#L224-L225
IOW, you're invoking this as a UVKZGPoly, but really, you can use UniPoly everywhere instead. I should perhaps not have made this type alias public.
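(For context, the linked definition boils down to a type alias, presumably along these lines:

pub type UVKZGPoly<F> = UniPoly<F>;

so every UVKZGPoly in shplonk.rs can be spelled UniPoly directly.)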
I assume this is true for Zeromorph as well, right?
Yes
src/provider/shplonk.rs
Outdated
let q = HyperKZG::<E, NE>::get_batch_challenge(&pi.evals, transcript);
let q_powers = HyperKZG::<E, NE>::batch_challenge_powers(q, pi.comms.len());

let R_x = UVKZGPoly::new(pi.R_x.clone());
Same remark, this is a UniPoly morally speaking.
src/provider/shplonk.rs
Outdated
let mut batched_eval = E::Fr::ZERO;
evals_i
  .iter()
  .zip_eq(q_powers.iter())
  .for_each(|(eval, q_i)| {
    batched_eval += *eval * q_i;
  });
I believe this is UniPoly::ref_cast(evals_i).evaluate(q), eliminating the need for q_powers.
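To spell out the equivalence (a sketch, assuming evals_i holds the coefficients in ascending order): the fold over q_powers is just a univariate evaluation at q, which Horner's rule computes without materializing the powers:

// Sketch: Horner evaluation of the polynomial with coefficients evals_i at q.
let batched_eval = evals_i
  .iter()
  .rev()
  .fold(E::Fr::ZERO, |acc, eval| acc * q + *eval);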
src/provider/shplonk.rs
Outdated
let C_P = q_powers | ||
.iter() | ||
.zip_eq(pi.comms.iter()) | ||
.fold(E::G1::identity(), |acc, (q_i, C_i)| acc + *C_i * q_i); |
This looks like pi.comms.iter().rlc(q), I think.
My draft benchmarking experiments (with "hacked" parallelism; I'm looking forward to review comments from @adr1anh about that) show that with Shplonk we should have a slightly better prover and a comparable verifier relative to the current HyperKZG.
Despite the MSM elimination on the verifier side, in Shplonk we still have heavy
I have an upcoming branch that may help with that (https://github.com/huitseeker/arecibo/tree/rayon-parscan; warning, very WIP)
This LGTM, great base to keep iterating!
This PR adds a Shplonk-based optimisation to the HyperKZG PCS. To simplify the reviewing process and focus on correctness first, it is implemented as a separate PCS (one more implementation of EvaluationEngineTrait) and contains most of @adr1anh's suggestions, kindly provided both in private discussions and during intermediate reviews of the draft version.
Once this PR is merged, I'm planning follow-up PRs to solve #302, #304 and #303.