
NIP46 - update NIP44 calls #983

Merged
merged 3 commits into nostr-protocol:master on Feb 21, 2024

Conversation

monlovesmango
Member

Is it too late to update the NIP44 methods in NIP46? I removed getKey and made encrypt and decrypt take arrays of params so encryption and decryption can happen in bulk.

These changes reflect the decisions that were made in the NIP07 PR #940.

https://github.com/monlovesmango/nips/blob/NIP46-update-NIP44-calls/46.md

@monlovesmango
Member Author

@alexgleason can you expand on how you envision batching to be implemented for nip46? I am open to what you are suggesting but would like something a little more concrete to evaluate.

cc @staab @vitorpamplona @arthurfranca @erskingardner @fiatjaf

@alexgleason
Member

Sure, here's what I propose:

  1. Leave kind 24133 alone, but deprecate it. Leave the structure of RPC calls alone.
  2. Introduce a kind 24135. It has the exact same semantics and structure as the old kind, but encrypted in its content is an array of RPC calls/responses.

That's it.

Here's an example request:

{
  "kind": 24135,
  "pubkey": "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86",
  "content": nip44([
    {
      "id": "1",
      "method": "nip44_decrypt",
      "params": [<third_party_pubkey>, "01abcdef..."]
    },
    {
      "id": "2",
      "method": "nip44_decrypt",
      "params": [<third_party_pubkey>, "09dcdf..."]
    },
    {
      "id": "3",
      "method": "sign_event",
      "params": [json_stringified(<{
          content: "Hello, I'm signing remotely",
          pubkey: "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
          // ...the rest of the event data
      }>)]
    },
  ]),
  "tags": [["p", "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52"]], // p-tags the remote user pubkey
}

Example response:

{
  "kind": 24135,
  "pubkey": "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
  "content": nip44([
    {
      "id": "1",
      "method": "nip44_decrypt",
      "success": true,
      "data": ["hello world"]
    },
    {
      "id": "2",
      "method": "nip44_decrypt",
      "success": false,
      "error": "Malformed encryption data"
    },
    {
      "id": "3",
      "method": "sign_event",
      "success": true,
      "data": [<signed-event>]
    },
  ]),
  "tags": [["p", "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86"]], // p-tags the remote user pubkey
}

The results don't have to be structured exactly this way; it should just give you an idea. I went with a result type similar to zod's safeParse, which is well understood and, I think, very good. But the main point is just that you put a list in and get a list out. Everything else is just semantics.

@monlovesmango
Member Author

I like this a lot @alexgleason, thank you. Let me update this PR.

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

I don't see how batching improves anything. What do we gain? Time checking the signatures of the encompassing event?

@alexgleason
Member

@fiatjaf NIP-46 has overhead of encryption / network transfer / decryption / encryption / network transfer / decryption - in both directions.

Instead of trying to solve this on the individual function level, it makes more sense to solve it at the event level and keep the individual RPC calls atomic.

@vitorpamplona
Collaborator

I don't see how batching improves anything. What do we gain? Time checking the signatures of the encompassing event?

Mostly the latency in decryption calls of multiple messages in chats.

Some relays might also block too many new events in the same second, thinking it is spam (e.g. when decrypting 10+ messages in the same chat screens).

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

Batching stuff will still give you overhead of encrypting a bigger blob, decrypting it, sending a big event and so on. The only thing you gain is that you only have to sign and check one signature instead of many.

Also this batching stuff is only for loading past chat history? Why not store these messages locally after they were decrypted once?

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

Mostly the latency in decryption calls of multiple messages in chats.

What about the latency of sending a big event with a ton of requests?

Some relays might also block too many new events in the same second

Aren't relays going to also block big encrypted events containing 10+ request/responses in them?

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

I think we can either accept the inefficiency of one request per decryption and then cache these things locally as an optimization or we can implement getConversationKey() and be maximally efficient. And both approaches could be done simultaneously.

The batching stuff feels very inelegant to me, and a bad solution stays bad whether it lives in a single request or in multiple batched requests inside the same event.

@alexgleason
Member

@fiatjaf The question is not whether to batch or not. The question is, if we were to support batching, should it be done on an individual function level or at the whole event level? All the problems you describe are problems with the previously proposed nip44_decrypt function that would accept any number of arguments.

@vitorpamplona
Collaborator

Yep, all of these ideas have tradeoffs. I kinda want to wait for a client to code one of these and give us some hard data on how much gain they actually provide.

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

All the problems you describe are problems with the previously proposed nip44_decrypt function that would accept any number of arguments.

Yes, they are. And we gain nothing here, but then we add a breaking way of doing NIP-46 requests immediately after we had all agreed, in @erskingardner's PR from last week, that we wouldn't break NIP-46? This is insane to me.

If we are going to break things let's at least break it right and use his proposal.

@alexgleason
Member

Okay. Then add a new top-level RPC call called "batch" which batches multiple calls.

I proposed a new event kind to preserve backwards-compat. But if we want to force it to work the old way, it can look like this instead.

This to me is worse than just adding a new event kind, though? The idea about "breaking things" being a huge existential crisis is so overstated. Okay, so just make this a new NIP instead, and then that will help people with it psychologically?

{
  "kind": 24133,
  "pubkey": "eff37350d839ce3707332348af4549a96051bd695d3223af4aabce4993531d86",
  "content": nip44({
    "id": "0",
    "method": "batch",
    "params": [
      {
        "id": "1",
        "method": "nip44_decrypt",
        "params": [<third_party_pubkey>, "01abcdef..."]
      },
      {
        "id": "2",
        "method": "nip44_decrypt",
        "params": [<third_party_pubkey>, "09dcdf..."]
      },
      {
        "id": "3",
        "method": "sign_event",
        "params": [json_stringified(<{
            content: "Hello, I'm signing remotely",
            pubkey: "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52",
            // ...the rest of the event data
        }>)]
      },
    ]
  }),
  "tags": [["p", "fa984bd7dbb282f07e16e7ae87b26a2a7b9b90b7246a44771f0cf5ae58018f52"]], // p-tags the remote user pubkey
}

@fiatjaf
Member

fiatjaf commented Feb 20, 2024

This to me is worse than just adding a new event kind, though? The idea about "breaking things" being a huge existential crisis is so overstated. Okay, so just make this a new NIP instead, and then that will help people with it psychologically?

No, it won't help to create a new "method" or a new NIP. It is not a psychological issue, it is a compatibility issue. I will ask that you read #1047 (comment) because I have made my point way too many times already in the past week.

@monlovesmango force-pushed the NIP46-update-NIP44-calls branch from cb17994 to 9c7ae62 on February 20, 2024 22:18
@monlovesmango
Member Author

Rebased and removed all the batching stuff, just too much disagreement. Pulling an @alexgleason :)

Agree with what fiatjaf is saying about having duplicate standards, and there doesn't seem to be a good way to implement batching that everyone will be happy with. This now also aligns with #1063, which makes me happy.

I am coming to the realization that there's not really a difference in network latency between batched calls and atomic calls if you send all your atomic calls concurrently rather than sequentially.
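The concurrency point above can be sketched in a few lines. This is illustrative only: `rpcCall` is a hypothetical stand-in for a full NIP-46 round trip (encrypt, publish, await response, decrypt), here simulated with a fixed delay:

```typescript
// Hypothetical stand-in for one NIP-46 round trip; the 50 ms delay
// simulates encrypt + network transfer + decrypt in both directions.
async function rpcCall(method: string, params: string[]): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return `${method}(${params.length} params)`;
}

// Sequential atomic calls: total latency grows as ~50 ms × N.
async function sequential(ciphertexts: string[]): Promise<string[]> {
  const out: string[] = [];
  for (const c of ciphertexts) out.push(await rpcCall("nip44_decrypt", [c]));
  return out;
}

// Concurrent atomic calls: all requests are in flight at once,
// so total latency stays at roughly one round trip regardless of N.
async function concurrent(ciphertexts: string[]): Promise<string[]> {
  return Promise.all(ciphertexts.map((c) => rpcCall("nip44_decrypt", [c])));
}
```

With concurrent dispatch the client gets most of the latency benefit that batching promised, without any protocol change.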

@staab
Member

staab commented Feb 20, 2024

Batching can be added in the future in a compatible way. If clients want to batch, they can check if the method exists, and use it or fall back. I'm happy with this.
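The check-and-fall-back approach could look like the following sketch. Everything here is hypothetical: `rpcCall`, the `batch` method name, and the error shape are assumptions, since no batch method is specified:

```typescript
// Hypothetical single-request helper; this stub pretends the signer
// only understands the atomic methods and rejects "batch".
type Call = { method: string; params: string[] };

async function rpcCall(call: Call): Promise<string> {
  if (call.method === "batch") throw new Error("unknown method: batch");
  return `${call.method}:ok`;
}

// Probe for a future "batch" method; on rejection, fall back to
// issuing the calls individually (and concurrently).
async function callWithFallback(calls: Call[]): Promise<string[]> {
  try {
    const joined = await rpcCall({
      method: "batch",
      params: calls.map((c) => JSON.stringify(c)),
    });
    return joined.split("\n");
  } catch {
    return Promise.all(calls.map(rpcCall));
  }
}
```

A client could also cache the probe result per signer so the failed `batch` attempt only happens once per session.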

@staab
Member

staab commented Feb 20, 2024

I'm happy with this.

By "this" I mean not batching. I still don't like conversation keys.

@erskingardner
Contributor

erskingardner commented Feb 21, 2024

The idea about "breaking things" being a huge existential crisis is so overstated

Just wanted to state that I wholeheartedly agree with this. This certainly won't always be the case, so we should embrace it while we can in order to clean/improve/change things when we learn through using them that there is a better way.

All the same, happy to have the basic methods back in the PR.

@fiatjaf
Member

fiatjaf commented Feb 21, 2024

There is still disagreement about get_conversation_key.

@monlovesmango
Member Author

I still don't like conversation keys.

Me neither.

@vitorpamplona
Collaborator

vitorpamplona commented Feb 21, 2024

Let's get rid of nip44_conversation_key and merge this. We can always add this method in the future if there is enough data to satisfy the need for it.

PS: I am seeing the need for a more generic deterministic key derivation from the private key (which is what the conversation key does behind the scenes). We could have deriveAndSign(derivationParams, ...), deriveAndNip44Encrypt(derivationParams, ...), deriveAndNip44Decrypt(derivationParams, ...). Private Zaps is one of the use cases for this.
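The derivation idea floated above could be sketched as an interface. Nothing here is specified anywhere: the method names come from the comment, while `DerivationParams` and its fields are invented for illustration:

```typescript
// Hypothetical shape for generic deterministic key derivation from the
// signer's private key. All names and fields are illustrative only.
interface DerivationParams {
  purpose: string; // e.g. "private-zap" -- an assumed label, not a spec value
  index?: number;  // optional derivation index
}

interface DerivingSigner {
  deriveAndSign(params: DerivationParams, eventJson: string): Promise<string>;
  deriveAndNip44Encrypt(
    params: DerivationParams,
    pubkey: string,
    plaintext: string,
  ): Promise<string>;
  deriveAndNip44Decrypt(
    params: DerivationParams,
    pubkey: string,
    ciphertext: string,
  ): Promise<string>;
}
```

The point of the interface is that the derived key never leaves the signer; the client only passes derivation parameters and gets back signed or encrypted results.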

@monlovesmango
Member Author

monlovesmango commented Feb 21, 2024

Done, just reverted it back to "potential future addition".

@staab staab merged commit 7995a5b into nostr-protocol:master Feb 21, 2024
| `nip44_encrypt` | `[<third_party_pubkey>, <plaintext_to_encrypt>]` | `<nip44_ciphertext>` |
| `nip44_decrypt` | `[<third_party_pubkey>, <nip44_ciphertext_to_decrypt>]` | `<plaintext>` |
| `nip44_get_conversation_key`| _Potential future addition_ | |
Member


Just remove it entirely. There's no point putting future plans in a nip. Either it's defined or it isn't.

Member


I agree, #1065
