
feat: distribute public keys #123

Merged
merged 12 commits into from
Apr 25, 2023
89 changes: 72 additions & 17 deletions DEPLOY.md
@@ -1,36 +1,91 @@
# Manually Deploying mpc-recovery to GCP

GCP Project ID: pagoda-discovery-platform-dev
Service account: mpc-recovery@pagoda-discovery-platform-dev.iam.gserviceaccount.com
## Requirements

First, if you don't have credentials, go to [here](https://console.cloud.google.com/iam-admin/serviceaccounts/details/106859519072057593233;edit=true/keys?project=pagoda-discovery-platform-dev) and generate a new one for yourself.
This guide assumes you have access to the GCP console and the administrative ability to enable services, create service accounts, and grant IAM roles if necessary.

Now, assuming you saved it as `mpc-recovery-creds.json` in the current working directory:
Choose a region to use throughout this guide. It can be any region, but if you are deploying production nodes we recommend one close to our leader node in `us-east1`. Your chosen region will be referred to as `GCP_REGION`.

```bash
$ cat pagoda-discovery-platform-dev-92b300563d36.json | docker login -u _json_key --password-stdin https://us-east1-docker.pkg.dev
Make sure that:
* You have a GCP Project (its ID will be referred to as `GCP_PROJECT_ID` below; it should look something like `pagoda-discovery-platform-dev`)
* `GCP_PROJECT_ID` has the following services enabled:
* `Artifact Registry`
* `Cloud Run`
* `Datastore` (should also be initialized with the default database)
* `Secret Manager`
* You have a service account dedicated to mpc-recovery (it will be referred to as `GCP_SERVICE_ACCOUNT` below and should look something like `mpc-recovery@pagoda-discovery-platform-dev.iam.gserviceaccount.com`).
* `GCP_SERVICE_ACCOUNT` should have the following roles granted to it (change in `https://console.cloud.google.com/iam-admin/iam?project=<GCP_PROJECT_ID>`):
* `Artifact Registry Writer`
* `Cloud Datastore User`
* `Secret Manager Secret Accessor`
* `Cloud Run Admin` (TODO: might be able to downgrade to `Cloud Run Developer`)
* You have JSON service account keys for `GCP_SERVICE_ACCOUNT`. If you don't, follow the steps below:
1. Go to the service account page (`https://console.cloud.google.com/iam-admin/serviceaccounts?project=<GCP_PROJECT_ID>`)
2. Select your `GCP_SERVICE_ACCOUNT` in the list
3. Open `KEYS` tab
4. Press `ADD KEY` and then `Create new key`.
5. Choose `JSON` and press `CREATE`.
6. Save the key file somewhere on your filesystem; we will refer to its location as `GCP_SERVICE_ACCOUNT_KEY_PATH`.
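Alternatively, the key can be created from the gcloud CLI instead of the console. This is a sketch assuming you are already authenticated with sufficient permissions; substitute your own values for the placeholders:

```bash
# Create a new JSON key for the service account and save it locally
# (the output filename here is just an example).
$ gcloud iam service-accounts keys create mpc-recovery-creds.json \
    --iam-account=<GCP_SERVICE_ACCOUNT> \
    --project=<GCP_PROJECT_ID>
```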

## Configuration

Your point of contact at Pagoda should have given you your Node ID (ask them if not). It is very important that you use this specific ID in your node's configuration; we will refer to this value as `MPC_NODE_ID`.

[TODO]: <> (Change key serialization format to a more conventional format so that users can generate it outside of mpc-recovery)

You also need an Ed25519 key pair, which you can generate by running `cargo run -- generate 1` in this directory. Grab the JSON object after `Secret key share 0:`; it should look like this:
```json
{"public_key":{"curve":"ed25519","point":[120,153,87,73,144,228,107,221,163,76,41,132,123,208,73,71,110,235,204,191,174,106,225,69,38,145,165,76,132,201,55,152]},"expanded_private_key":{"prefix":{"curve":"ed25519","scalar":[180,110,118,232,35,24,127,100,6,137,244,195,8,154,150,22,214,43,134,73,234,67,255,249,99,157,120,6,163,88,178,12]},"private_key":{"curve":"ed25519","scalar":[160,85,170,73,186,103,158,30,156,142,160,162,253,246,210,214,173,162,39,244,145,241,58,148,63,211,218,241,11,70,235,89]}}}
```

This will log you into the GCP Artifact Repository.
Now save it to GCP Secret Manager under a name of your choosing (e.g. `mpc-recovery-key-prod`). This name will be referred to as `GCP_SM_KEY_NAME`.
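If you prefer the CLI over the console, the secret can be created with gcloud. This is a sketch; the input filename `secret_key_share.json` is a hypothetical placeholder for wherever you saved the JSON object above:

```bash
# Store the secret key share in Secret Manager under GCP_SM_KEY_NAME.
$ gcloud secrets create <GCP_SM_KEY_NAME> \
    --data-file=secret_key_share.json \
    --project=<GCP_PROJECT_ID>
```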

## Uploading Docker Image

First, let's create a new repository in GCP Artifact Registry. Go to `https://console.cloud.google.com/artifacts?project=<GCP_PROJECT_ID>`, press `CREATE REPOSITORY` and follow the form to create a new repository with **Docker** format and **Standard** mode. The name can be anything; we will refer to it as `GCP_ARTIFACT_REPO`.
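Equivalently, the repository can be created from the CLI (a sketch, assuming you are authenticated with permission to administer Artifact Registry):

```bash
# Create a Docker-format Artifact Registry repository in your region.
$ gcloud artifacts repositories create <GCP_ARTIFACT_REPO> \
    --repository-format=docker \
    --location=<GCP_REGION> \
    --project=<GCP_PROJECT_ID>
```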

Now, you need to log into the GCP Artifact Registry on your machine:

Build the mpc-recovery docker image like you usually would, but tag it with this image name:
```bash
$ cat <GCP_SERVICE_ACCOUNT_KEY_PATH> | docker login -u _json_key --password-stdin https://<GCP_REGION>-docker.pkg.dev
```
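If you have the gcloud CLI installed and authenticated, you can register it as a Docker credential helper instead of piping the key file through `docker login`:

```bash
# Configure Docker to authenticate to this registry host via gcloud.
$ gcloud auth configure-docker <GCP_REGION>-docker.pkg.dev
```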

Build the mpc-recovery docker image from this folder and make sure to tag it with this image name:

```bash
$ docker build . -t us-east1-docker.pkg.dev/pagoda-discovery-platform-dev/mpc-recovery-tmp/mpc-recovery
$ docker build . -t <GCP_REGION>-docker.pkg.dev/<GCP_PROJECT_ID>/<GCP_ARTIFACT_REPO>/mpc-recovery
```

Push the image to GCP Artifact Registry:

```bash
$ docker push us-east1-docker.pkg.dev/pagoda-discovery-platform-dev/mpc-recovery-tmp/mpc-recovery
$ docker push <GCP_REGION>-docker.pkg.dev/<GCP_PROJECT_ID>/<GCP_ARTIFACT_REPO>/mpc-recovery
```

You can check that the image has been successfully uploaded [here](https://console.cloud.google.com/artifacts/docker/pagoda-discovery-platform-dev/us-east1/mpc-recovery-tmp?project=pagoda-discovery-platform-dev).
You can check that the image has been successfully uploaded on the GCP Artifact Registry dashboard.
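The same check can be done from the CLI (a sketch; lists the images in your repository so you can confirm the push succeeded):

```bash
# List images in the repository to verify the upload.
$ gcloud artifacts docker images list \
    <GCP_REGION>-docker.pkg.dev/<GCP_PROJECT_ID>/<GCP_ARTIFACT_REPO>
```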

## Running on Cloud Run

Now reset the VM instance:

```bash
$ gcloud compute instances reset mpc-recovery-tmp-0
```
Pick a name for your Cloud Run service (e.g. `mpc-signer-pagoda-prod`); we will refer to it as `GCP_CLOUD_RUN_SERVICE`.

Run:

```bash
$ gcloud run deploy <GCP_CLOUD_RUN_SERVICE> \
--image=<GCP_REGION>-docker.pkg.dev/<GCP_PROJECT_ID>/<GCP_ARTIFACT_REPO>/mpc-recovery \
--allow-unauthenticated \
--port=3000 \
--args=start-sign \
--service-account=<GCP_SERVICE_ACCOUNT> \
--cpu=2 \
--memory=2Gi \
--min-instances=1 \
--max-instances=1 \
--set-env-vars=MPC_RECOVERY_NODE_ID=<MPC_NODE_ID>,MPC_RECOVERY_WEB_PORT=3000,RUST_LOG=mpc_recovery=debug,MPC_RECOVERY_GCP_PROJECT_ID=<GCP_PROJECT_ID> \
--set-secrets=MPC_RECOVERY_SK_SHARE=<GCP_SM_KEY_NAME>:latest \
--no-cpu-throttling \
--region=<GCP_REGION> \
--project=<GCP_PROJECT_ID>
```

The API should be available shortly on `http://34.139.85.130:3000`.
If the deploy finishes successfully, it will print a Service URL; share it with your Pagoda point of contact.
4 changes: 0 additions & 4 deletions integration-tests/tests/docker/mod.rs
@@ -8,7 +8,6 @@ use bollard::{
service::{HostConfig, Ipam, PortBinding},
Docker,
};
use curv::elliptic::curves::{Ed25519, Point};
use futures::{lock::Mutex, StreamExt};
use hyper::{Body, Client, Method, Request, StatusCode, Uri};
use mpc_recovery::msg::{AddKeyRequest, AddKeyResponse, NewAccountRequest, NewAccountResponse};
@@ -285,7 +284,6 @@ impl SignNode {
docker: &Docker,
network: &str,
node_id: u64,
pk_set: &Vec<Point<Ed25519>>,
sk_share: &ExpandedKeyPair,
datastore_url: &str,
gcp_project_id: &str,
@@ -297,8 +295,6 @@
"start-sign".to_string(),
"--node-id".to_string(),
node_id.to_string(),
"--pk-set".to_string(),
serde_json::to_string(&pk_set)?,
"--sk-share".to_string(),
serde_json::to_string(&sk_share)?,
"--web-port".to_string(),
1 change: 0 additions & 1 deletion integration-tests/tests/lib.rs
@@ -80,7 +80,6 @@ where
&docker,
NETWORK,
i as u64,
&pk_set,
share,
&datastore.address,
GCP_PROJECT_ID,
74 changes: 73 additions & 1 deletion mpc-recovery/src/leader_node/mod.rs
@@ -1,5 +1,8 @@
use crate::key_recovery::get_user_recovery_pk;
use crate::msg::{AddKeyRequest, AddKeyResponse, NewAccountRequest, NewAccountResponse};
use crate::msg::{
AcceptNodePublicKeysRequest, AddKeyRequest, AddKeyResponse, NewAccountRequest,
NewAccountResponse,
};
use crate::oauth::{OAuthTokenVerifier, UniversalTokenVerifier};
use crate::relayer::error::RelayerError;
use crate::relayer::msg::RegisterAccountRequest;
@@ -10,6 +13,7 @@ use crate::transaction::{
};
use crate::{nar, NodeId};
use axum::{http::StatusCode, routing::post, Extension, Json, Router};
use curv::elliptic::curves::{Ed25519, Point};
use near_crypto::{ParseKeyError, PublicKey, SecretKey};
use near_primitives::account::id::ParseAccountError;
use near_primitives::types::AccountId;
@@ -76,6 +80,24 @@ pub async fn run(config: Config) {
pagoda_firebase_audience_id,
};

// Get keys from all sign nodes, and broadcast them out as a set.
let pk_set = match gather_sign_node_pks(&state).await {
Ok(pk_set) => pk_set,
Err(err) => {
tracing::error!("Unable to gather public keys: {err}");
return;
}
};
tracing::debug!(?pk_set, "Gathered public keys");
let messages = match broadcast_pk_set(&state, pk_set).await {
Ok(messages) => messages,
Err(err) => {
tracing::error!("Unable to broadcast public keys: {err}");
return;
}
};
tracing::debug!(?messages, "broadcasted public key statuses");

//TODO: not secure, allow only for testnet, whitelist endpoint etc. for mainnet
let cors_layer = tower_http::cors::CorsLayer::permissive();

@@ -439,6 +461,56 @@ async fn add_key<T: OAuthTokenVerifier>(
}
}
}

async fn gather_sign_node_pks(state: &LeaderState) -> anyhow::Result<Vec<Point<Ed25519>>> {
let fut = nar::retry_every(std::time::Duration::from_millis(250), || async {
let results: anyhow::Result<Vec<(usize, Point<Ed25519>)>> = crate::transaction::call(
&state.reqwest_client,
&state.sign_nodes,
"public_key_node",
(),
)
.await;
let mut results = match results {
Ok(results) => results,
Err(err) => {
tracing::debug!("failed to gather pk: {err}");
return Err(err);
}
};

results.sort_by_key(|(index, _)| *index);
let results: Vec<Point<Ed25519>> =
results.into_iter().map(|(_index, point)| point).collect();

anyhow::Result::Ok(results)
});

let results = tokio::time::timeout(std::time::Duration::from_secs(10), fut)
.await
.map_err(|_| anyhow::anyhow!("timeout gathering sign node pks"))??;
Ok(results)
}

async fn broadcast_pk_set(
state: &LeaderState,
pk_set: Vec<Point<Ed25519>>,
) -> anyhow::Result<Vec<String>> {
let request = AcceptNodePublicKeysRequest {
public_keys: pk_set,
};

let messages: Vec<String> = crate::transaction::call(
&state.reqwest_client,
&state.sign_nodes,
"accept_pk_set",
request,
)
.await?;

Ok(messages)
}

#[cfg(test)]
mod tests {
use super::*;
9 changes: 1 addition & 8 deletions mpc-recovery/src/main.rs
@@ -1,5 +1,4 @@
use clap::Parser;
use curv::elliptic::curves::{Ed25519, Point};
use mpc_recovery::{gcp::GcpService, LeaderConfig};
use multi_party_eddsa::protocols::ExpandedKeyPair;
use near_primitives::types::AccountId;
@@ -61,9 +60,6 @@ enum Cli {
/// Node ID
#[arg(long, env("MPC_RECOVERY_NODE_ID"))]
node_id: u64,
/// Root public key
#[arg(long, env("MPC_RECOVERY_PK_SET"))]
pk_set: String,
/// Secret key share, will be pulled from GCP Secret Manager if omitted
#[arg(long, env("MPC_RECOVERY_SK_SHARE"))]
sk_share: Option<String>,
@@ -166,7 +162,6 @@ async fn main() -> anyhow::Result<()> {
}
Cli::StartSign {
node_id,
pk_set,
sk_share,
web_port,
gcp_project_id,
@@ -175,12 +170,10 @@
let gcp_service = GcpService::new(gcp_project_id, gcp_datastore_url).await?;
let sk_share = load_sh_skare(&gcp_service, node_id, sk_share).await?;

// TODO put these in a better defined format
let pk_set: Vec<Point<Ed25519>> = serde_json::from_str(&pk_set).unwrap();
// TODO Import just the private key and derive the rest
let sk_share: ExpandedKeyPair = serde_json::from_str(&sk_share).unwrap();

mpc_recovery::run_sign_node(gcp_service, node_id, pk_set, sk_share, web_port).await;
mpc_recovery::run_sign_node(gcp_service, node_id, sk_share, web_port).await;
}
}

6 changes: 6 additions & 0 deletions mpc-recovery/src/msg.rs
@@ -1,3 +1,4 @@
use curv::elliptic::curves::{Ed25519, Point};
use ed25519_dalek::Signature;
use serde::{Deserialize, Serialize};

@@ -77,6 +78,11 @@ pub struct SigShareRequest {
pub payload: Vec<u8>,
}

#[derive(Serialize, Deserialize, Debug)]
pub struct AcceptNodePublicKeysRequest {
pub public_keys: Vec<Point<Ed25519>>,
}

mod hex_sig_share {
use ed25519_dalek::Signature;
use serde::{Deserialize, Deserializer, Serializer};
11 changes: 11 additions & 0 deletions mpc-recovery/src/nar.rs
@@ -6,6 +6,7 @@
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;

use near_crypto::PublicKey;
use near_jsonrpc_client::errors::{JsonRpcError, JsonRpcServerError};
@@ -23,6 +24,16 @@ use crate::relayer::error::RelayerError;

pub(crate) type CachedAccessKeyNonces = RwLock<HashMap<(AccountId, PublicKey), AtomicU64>>;

pub(crate) async fn retry_every<R, E, T, F>(interval: Duration, task: F) -> T::Output
where
F: FnMut() -> T,
T: core::future::Future<Output = core::result::Result<R, E>>,
{
let retry_strategy = std::iter::repeat_with(|| interval);
let task = Retry::spawn(retry_strategy, task);
task.await
}

pub(crate) async fn retry<R, E, T, F>(task: F) -> T::Output
where
F: FnMut() -> T,