Add ability to use a statically generated ecmult context. #254
Conversation
Cool! You need to update secp256k1_ecmult_gen_context_clone too. (I haven't run the code yet, but valgrind on the tests should have told you that you created a leak.)
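For reference, the check being suggested here is just running the test binary under valgrind, e.g.:

```
valgrind --leak-check=full ./tests
```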
@gmaxwell yup, fixed. @dcousens Tools like benchmarks are currently in src/. A related problem is that this patch currently does not work when cross-compiling with autotools: Autoconf exports a HOSTCC variable, but Automake seems to have no easy way to mark an executable to use it. I'm not sure what the standard procedure is here.
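For context, a common autotools pattern for host-run generator tools is sketched below. It assumes the AX_PROG_CC_FOR_BUILD macro from the autoconf archive is available; this is an illustration of the approach, not necessarily what the patch ends up doing:

```
# configure.ac: define CC_FOR_BUILD and BUILD_EXEEXT for the build machine
AX_PROG_CC_FOR_BUILD

# Makefile.am: compile and link gen_context with explicit rules, since
# Automake has no native notion of build-machine programs
gen_context.o: src/gen_context.c
	$(CC_FOR_BUILD) -I$(top_srcdir) -c $< -o $@

gen_context$(BUILD_EXEEXT): gen_context.o
	$(CC_FOR_BUILD) $< -o $@
```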
Ping @theuni for cross-compilation help.
@tdaede Gonna fix Travis? :)
@sipa Thanks for the ping, will have a look.
I reworked this yesterday so that it builds on the host using a "dumb" config that is universal and requires no dependencies. In practice, that means building without HAVE_CONFIG_H, since those values represent the target's config. I also swapped out the big char array for direct creation of a secp256k1_ge_storage_t using SECP256K1_GE_STORAGE_CONST, as suggested by @gmaxwell, since that alleviates endianness issues. As far as I can tell, this should allow someone to target mipseb-linux from mingw-win64 (my typical farfetched cross-compile scenario). My branch is here: https://github.com/theuni/secp256k1/tree/static-init
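To illustrate the endianness point, here is a minimal sketch of one entry written that way, using the well-known generator coordinates (assuming the in-tree group.h provides the type and macro):

```
#include "group.h"

/* The generator G expressed as sixteen 32-bit words (x then y, most
 * significant word first). Because the value is assembled from words at
 * compile time rather than stored as raw bytes, the object file carries
 * it in the target's native byte order. */
static const secp256k1_ge_storage_t g_storage = SECP256K1_GE_STORAGE_CONST(
    0x79BE667EUL, 0xF9DCBBACUL, 0x55A06295UL, 0xCE870B07UL,
    0x029BFCDBUL, 0x2DCE28D9UL, 0x59F2815BUL, 0x16F81798UL,
    0x483ADA77UL, 0x26A3C465UL, 0x5DA4FBFCUL, 0x0E1108A8UL,
    0xFD17B448UL, 0xA6855419UL, 0x9C47D08FUL, 0xFB10D4B8UL
);
```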
ACK
I wonder what the best solution is: having contexts that don't require a lot of memory, or having a fully static, complete context in the first place (allowing context = NULL pointers to be passed, or having a secp256k1_get_static_context function). The latter solution also solves some of the multithreading issues that some applications have with building a context without good reason, but disables the ability to blind the signing context.
I like the idea of context = NULL; I would use that.
I don't like the idea of context = NULL, because it leaves no room for the user to specify anything about the context, like whether it can do verification. Also, the linker probably cannot eliminate the static context when your program doesn't use it, because that would require the compiler to prove that the context argument is never NULL.
I think the API should be the same regardless of the implementation, so I don't like passing in NULL context pointers. This is also true of dynamically generated but secretly shared contexts. I don't plan to address that use case in this patch, but it should be pretty easy to add later without changing the API.
@voisine I understand that maintaining a context in the caller is a serious complication for usage, but the problem here is that that model inherently means using the library in a suboptimal way (due to the inability to apply application-supplied randomness for blinding), so I'm still on the fence.
@sipa, it's an extra complication, not sure if I'd call it serious, but it would be nice to do without. What's the recommended practice with respect to blinding? I'm currently just using secp256k1_context_create. Should I also be using secp256k1_context_randomize? Is once per application launch sufficient, or is it desirable to re-randomize after each signature? Use separate randomized contexts for signing and verification?
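For concreteness, the pattern being asked about looks roughly like this. It is a sketch only, not a recommendation on re-randomization frequency, and fill_random stands in for an assumed application-supplied CSPRNG that is not part of the library:

```
#include <stddef.h>
#include <secp256k1.h>

/* fill_random is a hypothetical application-supplied CSPRNG. */
extern void fill_random(unsigned char *buf, size_t len);

/* Create a signing context and blind it with fresh randomness. */
static secp256k1_context *make_signing_context(void) {
    unsigned char seed[32];
    secp256k1_context *ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
    if (ctx == NULL) return NULL;
    fill_random(seed, sizeof(seed));                /* 32 bytes of entropy */
    if (!secp256k1_context_randomize(ctx, seed)) {  /* blind the context */
        secp256k1_context_destroy(ctx);
        return NULL;
    }
    return ctx;
}
```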
The obvious method for a shared context would just be a global variable holding the static context, lazily initialized. Unfortunately, doing that safely requires threading primitives. This patch just addresses a static context stored in ROM; I don't think the other feature, a global context, is in scope here.
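A sketch of that lazily-initialized global, showing why it drags in a threading primitive (pthread_once here) to be safe:

```
#include <pthread.h>
#include <secp256k1.h>

static secp256k1_context *global_ctx;
static pthread_once_t global_ctx_once = PTHREAD_ONCE_INIT;

static void global_ctx_init(void) {
    /* Runs exactly once, even if many threads race to get the context. */
    global_ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);
}

const secp256k1_context *get_global_context(void) {
    pthread_once(&global_ctx_once, global_ctx_init);
    return global_ctx;
}
```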
Thomas: well, I don't think I want the complexity of two mechanisms to…
@voisine Well, this library is experimental, and I would argue against using it for validation in any production system. That said, your question makes it clear that the intended use isn't clear. That should definitely be clarified in the header file and perhaps a README. I guess it should list this: …
So, for the other use cases here (hidden context sharing), I don't really see any good way to implement them. Using a global variable to store a context once one has been initialized can cause locking issues, and calls to randomize while some other thread is signing would lead to very bad things. So I don't think there's any good way to deduplicate contexts behind the application's back. Basically, I think the current patch's implementation is fine (with room to add a static validation table later, once a slower and smaller table option is available). Also, to be clear, this patch does not reduce memory footprint; rather, it moves the footprint from RAM to ROM. On a desktop machine, it only bloats the size of the library.
```
@@ -95,6 +96,11 @@ AC_ARG_ENABLE(endomorphism,
 AS_HELP_STRING([--enable-endomorphism],[enable endomorphism (default is no)]),
 [use_endomorphism=$enableval],
 [use_endomorphism=no])
+
+AC_ARG_ENABLE(ecmult_static_context,
+AS_HELP_STRING([--enable-ecmult-static-context],[enable ecmult static context for signing (default is no)]),
```
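With the hunk above applied, the new option would be enabled at configure time (using the name as it stands here, before any rename):

```
./configure --enable-ecmult-static-context
```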
Static context is not really the right name - contexts are still dynamic, but some of their content is made static. How about `--enable-static-precomputation`?
Feel like addressing my nits above, @tdaede?
@sipa You might have a look at https://github.com/theuni/secp256k1/tree/static-init2. Don't bother looking at individual commits, and keep in mind that it's just a rough POC. This makes the context (which really wouldn't be a context anymore; it could be renamed to something more apt) const, so that it's unaffected by blinding operations. Instead, a new blinding structure is passed in when needed. Additionally, the initial blinding value is static, to avoid race conditions for applications creating static contexts/blinds at startup. As a result, NULL could be passed in place of the blind, in which case the static blind would be used. I believe this solves several issues at once: …
If you're onboard with the concept, I'll clean it up and keep working.
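A rough sketch of the API shape being described (every name below is invented for illustration; none of this is the library's actual interface):

```
#include <secp256k1.h>

/* Hypothetical: opaque, mutable per-thread blinding state, kept separate
 * from the now-const (and therefore freely shareable) context. */
typedef struct secp256k1_blind secp256k1_blind;

/* Hypothetical signing entry point: passing blind = NULL would fall back
 * to the static default blind mentioned above. */
int secp256k1_ecdsa_sign_with_blind(
    const secp256k1_context *ctx,
    secp256k1_blind *blind,
    unsigned char *sig64,
    const unsigned char *msg32,
    const unsigned char *seckey32
);
```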
@theuni I don't understand what advantage you see in the caller now having two contexts they must carry around. Perhaps you expect them to create and destroy the blinding around each operation? If so, this would somewhat undermine it, as it would lose the accumulation of unpredictability. As a random aside, please don't introduce any more "_t" named types in the interface; typedefs ending in _t are namespace-reserved by POSIX.
Force-pushed from b3eb2e4 to 87972b2
@gmaxwell It's my understanding that the blinding should be re-randomized relatively rarely, roughly no more than a few times per day. It's very possible that I've misunderstood that. As for the _t types, noted, thanks. From my perspective, the advantage is flexibility. It's easy to imagine, for example, an application generating keys in one thread and signing transactions in another. In that case, it would be advantageous to share a constants-only context but maintain per-thread blindings. They could then be rotated infrequently but independently, without the risk of a collision. It also solves @sipa's problem above, so a create_static_context() could be added. I know I mentioned this to you a while back and you weren't in favor; I just wanted to put it into code to see what it might look like.
This vastly shrinks the size of the context required for signing on devices with memory-mapped Flash. Tables are generated by the new gen_context tool into a header.
@theuni Thanks for your work, but I think there are more applications for the context that are harder to integrate with this approach, like modifying memory usage, setting callbacks for errors, ...
No worries. I discussed with @gmaxwell after commenting, and it's clear now why that idea isn't very helpful.
dcb2e3b variable signing precompute table (djb)

Pull request description:

This pull request gives an option to reduce the precomputed table size for the signing context (`ctx`) by setting `#define ECMULT_GEN_PREC_BITS [N_BITS]`.

Motivation: Per #251 and #254, the static table can be reduced to 64kB. However, this is still too big for some of my embedded applications. Setting `#define ECMULT_GEN_PREC_BITS 2` produces a 32kB table at a tradeoff of about 75% of the signing speed. Not defining this value will default to the existing implementation of 4 bits.

Statistics:

```
ECMULT_GEN_PREC_BITS = 1
Precomputed table size: 32kB
./bench_sign
ecdsa_sign: min 195us / avg 200us / max 212us

ECMULT_GEN_PREC_BITS = 2
Precomputed table size: 32kB
./bench_sign
ecdsa_sign: min 119us / avg 126us / max 134us

ECMULT_GEN_PREC_BITS = 4 (default)
Precomputed table size: 64kB
./bench_sign
ecdsa_sign: min 83.5us / avg 89.6us / max 95.3us

ECMULT_GEN_PREC_BITS = 8
Precomputed table size: 512kB
./bench_sign
ecdsa_sign: min 96.4us / avg 99.4us / max 104us
```

Only values of 2 and 4 make sense. 8 bits causes a larger table size with no increase in speed. 1 bit runs, but does not reduce table size and is slower than 2 bits.

ACKs for top commit:
real-or-random: ACK dcb2e3b, verified that all changes to the previously ACKed 1d26b27 were due to the rebase
jonasnick: ACK dcb2e3b, read the code and tested various configurations with valgrind

Tree-SHA512: ed6f68ca23ffdc4b59d51525336b34b25521233537edbc74d32dfb3eafd8196419be17f01cbf10bd8d87ce745ce143085abc6034727f742163f7e5f13f26f56e
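For intuition about where those sizes come from, a back-of-the-envelope sketch, assuming the table holds 256/b rows of 2^b points with each point stored as a 64-byte group element:

```
#include <stddef.h>

/* Precomputed table size for b = ECMULT_GEN_PREC_BITS, assuming
 * (256/b) rows x 2^b entries x 64 bytes per entry:
 *   b=1: 32 KiB, b=2: 32 KiB, b=4: 64 KiB, b=8: 512 KiB
 * which matches the benchmark table above. */
static size_t prec_table_bytes(unsigned b) {
    return (size_t)(256 / b) * ((size_t)1 << b) * 64;
}
```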