Problem
Currently, each external variable is a single-value Pedersen commitment: V = v*B + b*B_blinding. However, some applications (e.g. ML) may need to commit to whole vectors of N values, and using N Pedersen commitments would needlessly blow up the proof size.
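To make the equation concrete, here is a toy sketch of a single-value Pedersen commitment. It works multiplicatively in Z_p* for a small prime p, so V = v*B + b*B_blinding becomes V = B^v * B_blinding^b mod p. The prime and generators are arbitrary stand-ins for illustration; a real implementation would use a prime-order curve group.

```python
# Toy single-value Pedersen commitment in Z_p* (illustration only,
# NOT cryptographically secure; real code uses a prime-order curve).
p = 101                # small prime modulus (hypothetical choice)
B, B_blinding = 2, 3   # stand-in generators (hypothetical choices)

def commit(v, b):
    """Commit to value v with blinding factor b:
    the multiplicative form of V = v*B + b*B_blinding."""
    return (pow(B, v, p) * pow(B_blinding, b, p)) % p

V = commit(v=7, b=42)
```

The additive homomorphism that the protocol relies on is visible even in the toy group: commit(v1, b1) * commit(v2, b2) mod p equals commit(v1 + v2, b1 + b2).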
Idea
(Warning: this is a very quick midnight draft, without much thought put into it.)
Consider the above equation. The vector v could be split into subvectors committed with orthogonal generators B_i as vector Pedersen commitments. The rest of the protocol stays the same, and the change is backwards compatible with single-value commitments.
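The proposed vector commitment can be sketched the same way: V = sum_i v_i*B_i + b*B_blinding, written multiplicatively as V = prod_i B_i^(v_i) * B_blinding^b. The generators below are hypothetical small integers; "orthogonal" really means generators with no known discrete-log relation, which small integers in Z_p* do not provide — this is purely an arithmetic illustration.

```python
# Toy vector Pedersen commitment in Z_p* (illustration only,
# NOT secure; real generators come from a prime-order curve group).
p = 101
Bs = [2, 3, 7, 11]  # stand-ins for the orthogonal generators B_i
B_blinding = 5

def vector_commit(vs, b):
    """Commit to the whole vector vs under one blinding factor b:
    the multiplicative form of V = sum_i v_i*B_i + b*B_blinding."""
    acc = pow(B_blinding, b, p)
    for B_i, v_i in zip(Bs, vs):
        acc = (acc * pow(B_i, v_i, p)) % p
    return acc

# Backwards compatibility: a length-1 vector uses only B_0,
# which is exactly the single-value commitment shape.
V_single = vector_commit([9], b=4)
```

One commitment now covers N values, which is the size saving the issue asks for, and the commitment stays additively homomorphic componentwise.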
I have a design for this lying around, along with a fork of this crate implementing it, and proofs. Would you mind sending me an email? We're still pushing for publication.