Added Montgomery module and minor changes in Elliptic module #67
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             main      #67      +/-   ##
==========================================
+ Coverage   70.85%   71.36%   +0.51%
==========================================
  Files          22       23       +1
  Lines        2131     2169      +38
  Branches      499      507       +8
==========================================
+ Hits         1510     1548      +38
  Misses        485      485
  Partials      136      136
```
Continue to review full report at Codecov.
@LoupVaillant A quick note on the changed handling of the low order points in Montgomery, perhaps? The main concern is whether …
I feel like you’re playing a dangerous game here. More to the point, I don’t think this is even needed.

First, the Montgomery ladder has two properties: it’s an X-only ladder that ignores the sign of the point, and it confuses the points at zero and infinity. Your hack seems to alleviate the second point, though it would have to be checked more thoroughly. One disturbing point about it is the conditional, which makes your code not quite constant time any more (well, Python arithmetic is not constant time to begin with, but this is yet another thing to watch out for).

The real question, though: are you seriously going to represent low-order Montgomery points? I don’t think you will. Let’s go back to X25519. It generates points from the prime-order subgroup, but with a bias. First, it only generates about half of them, though that doesn’t matter thanks to the sign being ignored anyway, nor does the exact range of the points being generated. Second, even if it generated all points, the probability of generating the point at zero or infinity would be negligible. Third, it never generates those points.

Long story short, dirty public keys will never be low order. They have a low-order component, but we don’t care about separating it from the prime-order one; we only care about representing their sum.

The same goes for the birational equivalence: genuine public keys, dirty or not, won’t run into the birational exceptions. You only care about those if you’re converting attacker-controlled input. When converting from Edwards to Montgomery this is safe, because the point comes from a scalar multiplication you control. And when converting from Montgomery to Edwards, you’re supposed to only use keys you either generated or explicitly trust by other means.

I could have missed something, so I need to ask: what is this patch for? What goal is it trying to help achieve?
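For reference, here is a minimal sketch of the kind of X-only ladder under discussion, following RFC 7748 and the standard Curve25519 constants. It is illustrative only, not the code from this PR. It shows both properties: only the u-coordinate is processed, so the sign of the point never enters, and a result at infinity leaves Z = 0, which the final 1/Z step silently turns into u = 0, conflating the point at infinity with the order-2 point (0, 0).

```python
# Illustrative sketch of an X-only Montgomery ladder over Curve25519 (RFC 7748).
P = 2**255 - 19
A24 = 121665          # (486662 - 2) / 4

def cswap(swap, a, b):
    # Branch-free conditional swap. Python integers are not constant time,
    # but this mirrors the structure a constant-time C implementation uses.
    mask = -swap                  # swap is 0 or 1, so mask is 0 or -1
    dummy = mask & (a ^ b)
    return a ^ dummy, b ^ dummy

def ladder(k, u):
    # Unclamped scalar multiplication on the X-line, for scalars up to 255 bits.
    x1 = u % P
    x2, z2 = 1, 0                 # projective representation of the point at infinity
    x3, z3 = x1, 1
    swap = 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        x2, x3 = cswap(swap, x2, x3)
        z2, z3 = cswap(swap, z2, z3)
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3 = (da + cb) ** 2 % P
        z3 = x1 * (da - cb) ** 2 % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    x2, x3 = cswap(swap, x2, x3)
    z2, z3 = cswap(swap, z2, z3)
    # z2 == 0 means the result is the point at infinity, but the usual
    # 1/Z step computes inverse(0) == 0 and so returns u = 0.
    return x2 * pow(z2, P - 2, P) % P
```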
The Montgomery scalarmult is not intended to be used in Covert; it is mostly included for completeness and as additional verification of the correctness of other parts of the implementation (i.e. Ed25519 and Curve25519 are expected to be consistent). The special point handling came up in these tests. Prior to this patch we used y = u for -1, 0 and 1, as suggested by comments in the sodium source code, which allowed for consistent conversions both ways but which seems wrong after further study of the documentation. This is of course not a matter of interest to libraries only dealing with standard/random/dirty points, as you correctly point out.

This module might be released as a stand-alone Python package at a later time, in particular once Covert no longer relies on it. Completeness, including low order points and other special cases, is a useful property. But yes, the Montgomery scalarmult could still be missing some cases where these problems would be hit. One oddity is the use of …
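As an illustration of the kind of consistency check being described (a tiny self-contained sketch, not this module's actual test suite): under the standard birational equivalence u = (1 + y) / (1 - y) from RFC 7748, the Ed25519 base point with y = 4/5 must map to the X25519 base point u = 9, and the same formula makes the exceptional case visible, since at y = 1 (the Ed25519 neutral element) the denominator vanishes.

```python
# Sanity check: the Ed25519 base point maps to u = 9 under u = (1 + y) / (1 - y).
P = 2**255 - 19
ED25519_BASE_Y = 4 * pow(5, P - 2, P) % P   # y-coordinate of the Ed25519 base point

def ed_y_to_mont_u(y):
    if y % P == 1:
        # The neutral element has no birational image (division by zero);
        # silently computing inverse(0) == 0 here would yield u = 0 instead.
        raise ValueError("no birational mapping for y = 1")
    return (1 + y) * pow(1 - y, P - 2, P) % P

assert ed_y_to_mont_u(ED25519_BASE_Y) == 9
```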
Improved the low order point encoding; it is now inconsistent with libsodium and Monocypher, but more logical and able to represent all low order points correctly. The point at infinity is encoded as u = -1 (mod p), which is not a curve point and has no birational mapping to Ed25519 anyway (division by zero), while everything else uses the standard mapping and encoding specified in the RFCs. Similarly, the Ed25519 neutral element (ZERO, equivalent to the point at infinity) cannot use the birational equivalence, as it too causes division by zero due to its y coordinate being 1.
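A minimal sketch of the encoding rule described here (hypothetical helper names, not necessarily this module's API): affine points keep their real u-coordinate in the standard RFC 7748 little-endian encoding, and only the point at infinity, which has no affine u at all, gets the otherwise unused value u = -1 mod p.

```python
# Sketch of the special-case encoding: u = -1 mod p stands in for the point at infinity.
P = 2**255 - 19

def encode_u(point):
    # `point` is None for the point at infinity, otherwise an affine (u, v) pair.
    if point is None:
        return (P - 1).to_bytes(32, "little")   # u = -1 mod p, not a curve point
    u, _v = point
    return (u % P).to_bytes(32, "little")       # standard RFC 7748 encoding

def decode_u(data):
    u = int.from_bytes(data, "little") % P
    return None if u == P - 1 else u            # None = point at infinity
```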