NIP46 - update NIP44 calls #983
@alexgleason can you expand on how you envision batching to be implemented for nip46? I am open to what you are suggesting but would like something a little more concrete to evaluate. cc @staab @vitorpamplona @arthurfranca @erskingardner @fiatjaf
Sure, here's what I propose: a new event kind (24135 in the example below) whose content is a NIP-44-encrypted JSON array of request objects, with the response carrying a matching array of results.
That's it. Here's an example request:
{
"kind": 24135,
"pubkey": "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86",
"content": nip44([
{
"id": "1",
"method": "nip44_decrypt",
"params": [<third_party_pubkey>, "01abcdef..."]
},
{
"id": "2",
"method": "nip44_decrypt",
"params": [<third_party_pubkey>, "09dcdf..."]
},
{
"id": "3",
"method": "sign_event",
"params": [json_stringified(<{
content: "Hello, I'm signing remotely",
pubkey: "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
// ...the rest of the event data
}>)]
},
]),
"tags": [["p", "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52"]], // p-tags the remote user pubkey
}
Example response:
{
"kind": 24135,
"pubkey": "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
"content": nip44([
{
"id": "1",
"method": "nip44_decrypt",
"success": true,
"data": ["hello world"]
},
{
"id": "2",
"method": "nip44_decrypt",
"success": false,
"error": "Malformed encryption data"
},
{
"id": "3",
"method": "sign_event",
"success": true,
"data": [<signed-event>]
},
]),
"tags": [["p", "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86"]], // p-tags the remote user pubkey
}
The results don't have to be structured exactly this way; it should just give you an idea. I went with a result type similar to zod's.
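To make that result shape concrete, here is a minimal TypeScript sketch of the zod-style discriminated result and of how a signer might process a batch. The names `BatchItemResult`, `handleBatch`, and `dispatch` are hypothetical, not part of the proposal.

```typescript
// One possible shape for per-item results, modeled on zod's safeParse return
// type: a discriminated union on `success`.
type BatchItemResult =
  | { id: string; method: string; success: true; data: unknown[] }
  | { id: string; method: string; success: false; error: string };

// Hypothetical signer-side handling: each request is executed independently,
// so one malformed item does not abort the rest of the batch.
async function handleBatch(
  requests: { id: string; method: string; params: string[] }[],
  dispatch: (method: string, params: string[]) => Promise<unknown[]>,
): Promise<BatchItemResult[]> {
  return Promise.all(
    requests.map(async ({ id, method, params }) => {
      try {
        return { id, method, success: true as const, data: await dispatch(method, params) };
      } catch (e) {
        return { id, method, success: false as const, error: String(e) };
      }
    }),
  );
}
```

Keying results by `id` rather than by array position would also let the signer return them in any order.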
I like this a lot @alexgleason, thank you. Let me update this PR.
I don't see how batching improves anything. What do we gain? Time checking the signatures of the encompassing event?
@fiatjaf NIP-46 has the overhead of encryption / network transfer / decryption / encryption / network transfer / decryption, in both directions. Instead of trying to solve this at the individual function level, it makes more sense to solve it at the event level and keep the individual RPC calls atomic.
Mostly the latency of decryption calls for multiple messages in chats. Some relays might also block too many new events in the same second, thinking it is spam (e.g. when decrypting 10+ messages on the same chat screen).
Batching will still give you the overhead of encrypting a bigger blob, decrypting it, sending a big event and so on. The only thing you gain is that you only have to sign and check one signature instead of many. Also, is this batching only for loading past chat history? Why not store these messages locally after they were decrypted once?
What about the latency of sending a big event with a ton of requests?
Aren't relays going to also block big encrypted events containing 10+ request/responses in them?
I think we can either accept the inefficiency of one request per decryption and then cache these things locally as an optimization, or we can implement the batched calls. The batching stuff feels very inelegant to me, and a bad solution stays a bad solution whether it's in a single request or in multiple batched requests inside the same event.
@fiatjaf The question is not whether to batch or not. The question is: if we were to support batching, should it be done at the individual function level or at the whole event level? All the problems you describe are problems with the previously proposed approach as well.
Yep, all of these ideas have tradeoffs. I kinda want to wait for a client to code one of these and give us some hard data on how much gain they actually provide.
Yes, they are. And we gain nothing here, but then we add a breaking way of doing NIP-46 requests immediately after we all agreed that we wouldn't break NIP-46 in @erskingardner's PR from last week? This is insane to me. If we are going to break things, let's at least break them right and use his proposal.
Okay. Then add a new top-level RPC call called "batch" which batches multiple calls. I proposed a new event kind to preserve backwards compatibility, but if we want to force it to work the old way, it can look like this instead. This, to me, is worse than just adding a new event kind, though? The idea that "breaking things" is a huge existential crisis is so overstated. Okay, so just make this a new NIP instead, and then that will help people with it psychologically?
{
"kind": 24133,
"pubkey": "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86",
"content": nip44({
"id": "0",
"method": "batch",
"params": [
{
"id": "1",
"method": "nip44_decrypt",
"params": [<third_party_pubkey>, "01abcdef..."]
},
{
"id": "2",
"method": "nip44_decrypt",
"params": [<third_party_pubkey>, "09dcdf..."]
},
{
"id": "3",
"method": "sign_event",
"params": [json_stringified(<{
content: "Hello, I'm signing remotely",
pubkey: "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
// ...the rest of the event data
}>)]
},
]
}),
"tags": [["p", "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52"]], // p-tags the remote user pubkey
}
No, it won't help to create a new "method" or a new NIP. It is not a psychological issue; it is a compatibility issue. I will ask that you read #1047 (comment) because I have made my point way too many times already in the past week.
The branch was force-pushed from cb17994 to 9c7ae62.
Rebased and removed all the batching stuff, just too much disagreement. Pulling an @alexgleason :) I agree with what fiatjaf is saying about having duplicate standards, and there doesn't seem to be a good way to implement batching that everyone will be happy with. This now also aligns with #1063, which makes me happy. I am coming to the realization that there's not really a difference in network latency between batched calls and atomic calls if you send all your atomic calls concurrently rather than sequentially.
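For what it's worth, here is a quick sketch of that point, assuming a hypothetical `rpc` helper that sends one NIP-46 request event and resolves with the matching response: firing the atomic calls concurrently already collapses the latency.

```typescript
// `rpc` is a hypothetical helper: it publishes one kind 24133 request event
// and resolves with the decrypted result from the matching response event.
declare function rpc(method: string, params: string[]): Promise<string>;

// Decrypt many ciphertexts with plain atomic calls, but in parallel: the
// wall-clock latency is roughly one round trip, not N round trips in series.
async function decryptMany(thirdPartyPubkey: string, ciphertexts: string[]): Promise<string[]> {
  return Promise.all(ciphertexts.map((ct) => rpc("nip44_decrypt", [thirdPartyPubkey, ct])));
}
```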
Batching can be added in the future in a compatible way. If clients want to batch, they can check if the method exists, and use it or fall back. I'm happy with this.
By "this" I mean not batching. I still don't like conversation keys.
Just wanted to state that I wholeheartedly agree with this. This certainly won't always be the case, so we should embrace it while we can in order to clean/improve/change things when we learn through using them that there is a better way. All the same, happy to have the basic methods back in the PR.
There is still disagreement about `nip44_get_conversation_key`.
Me neither.
Let's get rid of it, then. PS: I am seeing the need for a more generic deterministic key derivation from the private key (which is what the conversation key does behind the scenes). We could have a dedicated method for that.
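For readers unfamiliar with what the conversation key does behind the scenes, here is a rough sketch of the NIP-44 v2 derivation (the ECDH shared x-coordinate run through HKDF-extract), written against the @noble packages. It is background only, not something this PR adds.

```typescript
import { secp256k1 } from "@noble/curves/secp256k1";
import { extract as hkdfExtract } from "@noble/hashes/hkdf";
import { sha256 } from "@noble/hashes/sha256";

// NIP-44 v2: conversation_key = hkdf_extract(IKM = shared_x, salt = "nip44-v2").
function getConversationKey(privkeyA: string, pubkeyB: string): Uint8Array {
  // x-only pubkey -> compressed point, then drop the parity byte to keep the
  // 32-byte shared x-coordinate.
  const sharedX = secp256k1.getSharedSecret(privkeyA, "02" + pubkeyB).subarray(1, 33);
  return hkdfExtract(sha256, sharedX, "nip44-v2");
}
```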
Done, just reverted it back to "potential future addition".
| `nip44_encrypt` | `[<third_party_pubkey>, <plaintext_to_encrypt>]` | `<nip44_ciphertext>` |
| `nip44_decrypt` | `[<third_party_pubkey>, <nip44_ciphertext_to_decrypt>]` | `<plaintext>` |
| `nip44_get_conversation_key` | _Potential future addition_ | |
Just remove it entirely. There's no point putting future plans in a nip. Either it's defined or it isn't.
I agree, #1065
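For context, this is roughly how one of the methods kept in the table above travels over the wire in a NIP-46 request today; the helper name `nip44Encrypt` and the placeholder pubkey constants are illustrative only.

```typescript
// Hypothetical helper that NIP-44-encrypts a payload to the remote signer.
declare function nip44Encrypt(recipientPubkey: string, plaintext: string): string;
declare const remoteSignerPubkey: string;
declare const thirdPartyPubkey: string;

// A single nip44_encrypt request, matching the params column of the table.
const request = {
  id: "1",
  method: "nip44_encrypt",
  params: [thirdPartyPubkey, "plaintext to encrypt"],
};

// Wrapped in the usual kind 24133 request event, p-tagging the remote signer pubkey.
const unsignedEvent = {
  kind: 24133,
  content: nip44Encrypt(remoteSignerPubkey, JSON.stringify(request)),
  tags: [["p", remoteSignerPubkey]],
  // ...pubkey, created_at, id and sig are added as for any other event
};
```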
Is it too late to update the NIP44 methods in NIP46? I removed `getKey` and made `encrypt` and `decrypt` take arrays of params so encryption and decryption can happen in bulk. These changes reflect the decisions that were made in the NIP07 PR #940.
https://github.com/monlovesmango/nips/blob/NIP46-update-NIP44-calls/46.md
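The exact param shape lives in the linked branch; purely as an illustration, one plausible reading of "arrays of params" is that each element pairs a counterparty pubkey with a payload and the results come back in the same order.

```typescript
// Hypothetical only -- check the linked 46.md for the shape actually proposed.
type BulkNip44DecryptParams = [thirdPartyPubkey: string, ciphertext: string][];

const params: BulkNip44DecryptParams = [
  ["<third_party_pubkey>", "01abcdef..."],
  ["<third_party_pubkey>", "09dcdf..."],
];
// Expected result, aligned by index: ["first plaintext", "second plaintext"]
```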