
Testnet Deployment via CI/CD #396

Merged
merged 8 commits into staging on Jul 11, 2022

Conversation

@CMCDragonkai commented Jul 1, 2022

Description

This PR works on the integration:deployment job in order to get PK deployed on testnet.polykey.io.

The release:deployment jobs will be done later, after mainnet is available. But first we will focus on just the testnet.

See https://about.gitlab.com/blog/2021/04/09/demystifying-ci-cd-variables/ to understand how variables are inherited on gitlab.

All of our deployment will occur with shell scripts and command line tools like aws and skopeo. There is no usage of terraform yet for specifying infrastructure resources; I actually think pulumi is a better idea overall for infrastructure as code.

See: https://aws.amazon.com/blogs/aws/amazon-ec2-update-virtual-private-clouds-for-everyone/ regarding the default VPC and how it works and https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html.

See: https://www.opensourcerers.org/2020/11/16/container-images-multi-architecture-manifests-ids-digests-whats-behind/ for explanation of container image internals.

Issues Fixed

Tasks

  • 1. Setup AWS_* env variables for authenticating to ECS and ECR, for controlling the ECS for running the containers
  • 2. Created matrix-ai-polykey AWS bot account that manipulates ECS, ECR and nix cache (due to lack of better POLA, and token composition)
  • 3. Setup CONTAINER_REGISTRY and CONTAINER_REPOSITORY variables to point to ECR for hosting our container images, also REGISTRY_AUTH_FILE is used to authenticate skopeo to the registry
  • 4. Brought in awscli, jq, and skopeo as tools in nix-shell
  • 5. Create scripts/deploy-image.sh that uses skopeo to push the image up to the ECR
  • 6. Integrated scripts/deploy-image.sh to integration:deployment job
  • 7. Added scoped environment variables to js-polykey gitlab project for it to use the matrix-ai-polykey account and REGISTRY_AUTH_FILE for staging and production scoped jobs
  • 8. Created polykey-testnet cluster to serve as the testnet cluster on AWS
  • [ ] 9. Made AWS_* variables protected, even for the matrix-ai-nix user account. This prevents the usage of our s3 cache on non-protected references, so any other user who submits a pull request will need the CI/CD run manually. We will figure out how to open the CI/CD to non-members of MatrixAI in the future after verifying software supply chain security. This will require an update to our gitlab-runner - this makes all non-protected branches/PRs incapable of running CI/CD jobs, because our gitlab-runner will error out when the AWS_* variables are not present
  • 10. Test out container image deployment on CI
  • 11. Specify Task Definition in our scripts so it can be automated
  • 12. Create scripts/deploy-service.sh that uses aws to replace the container image used for the polykey-testnet service
  • 13. Integrate npm run deploy-service into integration:deployment - this was done with npm run even though it doesn't have anything to do with NPM; it's just a script
  • 14. Verify all firewall and network tasks on Use a single --port argument for authorize/revoke operations in EC2 aws/aws-cli#194
  • [ ] 15. Swap to using secret root keys based on Merge bools aws/aws-cli#285 - doing this after we have fixed several bugs on the testnet Remove httpretty as a test dep aws/aws-cli#403 Updating requests to 2.0.0. aws/aws-cli#398 trying to upload empty file fails with a NotImplemented error aws/aws-cli#399 and infrastructure issues
  • 16. Optimised the src filter in utils.nix to avoid bringing in unnecessary files into the nix src for nix-build

Final checklist

  • Domain specific tests
  • Full tests
  • Updated inline-comment documentation
  • Lint fixed
  • Squash and rebased
  • Sanity check the final build

@CMCDragonkai CMCDragonkai self-assigned this Jul 1, 2022
@CMCDragonkai CMCDragonkai marked this pull request as draft July 1, 2022 05:55
@CMCDragonkai commented Jul 1, 2022

I'm bringing in awscli and skopeo into shell.nix first. These 2 will be used to interact with AWS ECS and the ECR registry respectively.

This means new development environment variables to be set.

Skopeo is an alternative to using docker to push up container images. It's a lot faster, and doesn't require a docker daemon, which is necessary in aws/aws-cli#391.

Skopeo works like this:

skopeo --insecure-policy copy docker-archive:$(nix-build release.nix -A docker) docker://015248367786.dkr.ecr.ap-southeast-2.amazonaws.com

I haven't run the above yet, and that is the AWS ECR registry we have.

As for AWS, once the container image is uploaded, we have to trigger an update of the ECS service:

aws ecs update-service \
  --cluster polykey \
  --service polykey \
  --desired-count 1 \
  --force-new-deployment

I think I will also create a new cluster like polykey-testnet and polykey-mainnet to separate the clusters from each other.

@CMCDragonkai

With the new NixOS 22.05 revision that we are on, we can finally use the docker build again. So this should make some testing easier. The pkgs.nix can remain the same.

@CMCDragonkai

Funny how aws/aws-cli#194 was opened last year, and it took an entire year of work to get to this point again.

@CMCDragonkai

Skopeo has several ways of authenticating to the ECR registry.

The first is that by default it will use the docker login, so if already logged in via docker, then it should work. This won't work for our CI/CD purposes, and I'd like to maintain the same way of authenticating between local development and in the CICD.

Another way is through the command line parameter --dest-creds USERNAME[:PASSWORD]. This is not secure, as per #385 (comment).

So lastly there is --dest-authfile and the env variable REGISTRY_AUTH_FILE. Here we have to authenticate to create an auth file, then refer to it in the path. I think this is what we will go for, in particular REGISTRY_AUTH_FILE. But we will need to place the contents somewhere.

In our .env.example we can set up REGISTRY_AUTH_FILE, but of course if it is not set, skopeo will use whatever docker is logged in to.

@CMCDragonkai

The REGISTRY_AUTH_FILE points to a file path that has the same data format as DOCKER_AUTH_CONFIG.

Basically:

{
  "auths": {
    "015248367786.dkr.ecr.ap-southeast-2.amazonaws.com": {
      "auth": "..."
    }
  }
}

To actually get the ECR login, we have to convert our AWS credentials into it:

# assume you have the `AWS_*` env variables set
aws ecr get-login-password --region ap-southeast-2 | skopeo login --username AWS --password-stdin 015248367786.dkr.ecr.ap-southeast-2.amazonaws.com --authfile=./tmp/auth.json

Notice that the username is AWS. Using --authfile places it in ./tmp/auth.json, although by default it would place it in $XDG_RUNTIME_DIR/containers/auth.json. The path to these files can be set as REGISTRY_AUTH_FILE.

So this means that in our .env.example we will comment out REGISTRY_AUTH_FILE since it's a bit complicated to set up, but it's a reminder to set it when using CI/CD scenarios. In GitLab, we don't have to do all of this; we can just copy-paste the full contents into REGISTRY_AUTH_FILE the same way we do for DOCKER_AUTH_CONFIG, except that unlike DOCKER_AUTH_CONFIG it's a file-type variable.

@CMCDragonkai

Ok, now that we are authenticated to the ECR registry and also have the $REGISTRY environment variable, we are going to do a nix-build ./release.nix -A docker and test pushing this up to the ECR registry using skopeo.

If this works, we will reify this command into one of our scripts. Optionally it can be accessible with npm run * scripts.

I think right now there's a number of relevant scripts:

  • Scripts in scripts/ - various custom tools, may be used by npm run, like proto-generate.sh and docker-run.sh
  • Scripts in npm run configured in package.json - most scripts/ will be replicated here; this aligns scripts with npm packages, and allows script execution on various OS platforms
  • All the nix- tooling - these sit outside the scripts and npm, and represent nix specific tooling for development in nix environments
  • Any other tooling brought in via shell.nix - other development tooling like skopeo and aws

@CMCDragonkai commented Jul 1, 2022

I've updated the .env.example. Assuming @tegefaulkes and @emmacasolin follow the same workflow of copying it to .env (redo this every time the .env.example is updated) and then editing all the required variables (the ones that are not commented out), you are now able to use skopeo commands like these:

skopeo list-tags docker://$CONTAINER_REPOSITORY
skopeo inspect docker://$CONTAINER_REPOSITORY
skopeo inspect --config docker://$CONTAINER_REPOSITORY:latest

The CONTAINER_REPOSITORY and CONTAINER_REGISTRY are now specified:

# Container registry domain
CONTAINER_REGISTRY='015248367786.dkr.ecr.ap-southeast-2.amazonaws.com'

# Container name located on the registry
CONTAINER_REPOSITORY="$CONTAINER_REGISTRY/polykey"

@CMCDragonkai commented Jul 1, 2022

I'm playing around with the skopeo copy command, and came across this issue: containers/skopeo#1699.

By default, skopeo copy ignores the tag and just uploads it as the latest tag on ECR.

The tag currently corresponds to the nix output hash:

tag specifies the tag of the resulting image. By default it’s null, which indicates that the nix output hash will be used as tag.

[nix-shell:~/Projects/js-polykey]$ ll result 
lrwxrwxrwx 1 cmcdragonkai operators 77 Jul  1 21:49 result -> /nix/store/rshk0jallj0pnnriby0f1lm022fibglp-docker-image-polykey-1.0.0.tar.gz

[nix-shell:~/Projects/js-polykey]$ docker load --input ./result
Loaded image: polykey-1.0.0:rshk0jallj0pnnriby0f1lm022fibglp

Which helps us connect the nix store output to the uploaded container images on ECR.

Right now in order to extract the tag we have to use either docker load or skopeo list-tags. Using docker load requires a daemon present, so instead, we use skopeo.

container_tag="$(skopeo list-tags docker-archive://$(nix-build release.nix -A docker) | jq -r '.Tags[0] | split(":")[1]')"

Which gives us rshk0jallj0pnnriby0f1lm022fibglp.

This means we should also have jq available as a command.

Then we use:

# preserve the $container_tag
skopeo --insecure-policy copy docker-archive:$(nix-build release.nix -A docker) "docker://$CONTAINER_REPOSITORY:$container_tag"

# now set it to the latest as well
skopeo --insecure-policy copy "docker://$CONTAINER_REPOSITORY:$container_tag" "docker://$CONTAINER_REPOSITORY:latest"

Note that containers also have an image ID. This image ID is calculated separately based on the internal layers and maybe the rootfs of the container image? Not sure. But it's possible to have a different nix output hash with the same image ID (if the nix derivation changed, but the output image didn't). If nix used content addressing, this wouldn't be an issue...
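For reference, a quick way to see what ECR actually records, versus the nix output hash we use as the tag, is to inspect the pushed image. This is just a sketch using the tools already in our nix-shell; the repository value comes from $CONTAINER_REPOSITORY:

# Manifest digest as stored on the registry (not the same as the nix output hash tag)
skopeo inspect "docker://$CONTAINER_REPOSITORY:latest" | jq -r '.Digest'

# Image config; the image ID is derived from this, and it lists the rootfs diff_ids
skopeo inspect --config "docker://$CONTAINER_REPOSITORY:latest" | jq '.rootfs.diff_ids'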

@CMCDragonkai

The pushing to the ECR is now in the integration:deployment jobs.

These variables are now embedded in the .gitlab-ci.yml file:

  # Container service
  CONTAINER_REGISTRY: "015248367786.dkr.ecr.ap-southeast-2.amazonaws.com"
  CONTAINER_REPOSITORY: "$CONTAINER_REGISTRY/polykey"

AWS creds and REGISTRY_AUTH_FILE are to go into GitLab.

Also added jq to the nix-shell.

@CMCDragonkai

Due to the new environment variables necessary:

  • AWS_DEFAULT_REGION
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • REGISTRY_AUTH_FILE

I've created a new user on aws: matrix-ai-polykey. This will be given access to only the ECS cluster and ECR atm.
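For local development the corresponding .env entries would look something like this (the values below are placeholders, not real credentials):

AWS_DEFAULT_REGION='ap-southeast-2'
AWS_ACCESS_KEY_ID='<matrix-ai-polykey access key id>'
AWS_SECRET_ACCESS_KEY='<matrix-ai-polykey secret access key>'
REGISTRY_AUTH_FILE="$PWD/tmp/auth.json"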

@CMCDragonkai commented Jul 2, 2022

After some reading, we have some solutions for the security of the capabilities passed down via environment variables (and dealing with software supply chain security).

The first idea is that gitlab supports scoped environment variables. This means it's possible to limit the injection of the environment variable to a specific environment scope.

This is done through two factors:

  1. The environment variable definition on the gitlab interface allows us to specify the environment scope. The default scope is * which means it is injected everywhere. However we are also limiting the variables to protected references, so this is an extra level of protection. We can use testnet and mainnet as environment scopes.
  2. The .gitlab-ci.yml file can specify that certain jobs have an environment scope. Jobs that have an environment scope can also specify additional metadata about a "deployment"; this augments the CI interface with deployment information. This looks useful and interesting, so we can try this out.

So basically we can give the integration:deployment job this additional configuration:

integration:deployment:
  # ... preexisting config
  environment:
    name: staging
    url: https://staging.example.com

The url is just an example, and the CI interface will actually show this on the deployments page: https://docs.gitlab.com/ee/ci/environments/index.html#view-environments-and-deployments. We don't actually have an HTTP URL necessarily for polykey, but I guess testnet.polykey.io will suffice for now. We could add a little dashboard there later which can show up when visiting via HTTP interface and can be useful for browsers.

Additionally the environment variable scope can be specified with wildcards. However if you need a variable that works for both mainnet and testnet scopes, how does this work? Perhaps you can do a little regex like mainnet|testnet? Or do you have to define 2 different variables?

If it is possible to define multiple variables with the same name but different scopes, this helps alleviate the problem of variable collision (where you end up unioning the capability permissions), such as when we need to deal with the nix cache, and also with ECS and ECR. Although I haven't tried it yet.

EDIT: checked, it is in fact possible to have multiple variables with the same name as long as their scopes are different. Cannot confirm whether a regex like mainnet|testnet works. See the below image as to how scoped variables override env variables.

[Image: environment scope specification for a CI/CD variable]

One additional thing regarding the usage of yaml: I find that pulumi is a better idea overall; a domain specific language is superior to just yaml. The yaml is a DSL in this case, but it's more of a "container language", like using JSON but with special meaning given to specific keywords and structure. We could generate YAML using our typescript/javascript, and then embed meaning into the typescript via type signatures and object structures and composition. It would be a far superior language to write the yaml in, and you get linting and automatic syntax & type checking for free. At the end, it can be generated into yaml, and I guess that's where things like dhall are useful too. But there's a problem here, and that's the mismatch and desynchronisation. I checked the linting of gitlab yaml files, and atm it's a networked service; it's just not a tool you can download. That means any build-up of a DSL for the gitlab yaml (and ultimately CI/CD configuration) should really be something that is supported by gitlab officially, rather than a third party tool compiling to a moving target. This is really what the Architect language should have been.

In other news, regarding the security of chocolatey packages: right now chocolatey is using packages provided by the chocolatey community. In particular the bill of materials includes nodejs and python, although extra packages may be needed in the future. In that sense, it's no more or less secure than npm packages and nixpkgs. All rely on the community. Officially they recommend hosting your own packages and internalizing them to avoid network access, especially given that packages are not "pinned" in chocolatey unlike nixpkgs (and part of the reason why we like nixpkgs). We would like to do this simply to improve our CICD performance and to avoid 429 Too Many Requests rate limiting. But from a security perspective, no matter what, you're always going to be running trusted (ideally) but unverified code. This is not ideal; all signatures can do is reify the trust chain, and that ultimately results in a chain of liability. But this is an ex post facto security technique. Regardless of the trust chain (https://www.chainguard.dev/), a vulnerability means the damage is already done. Preventing damage ahead of time requires more than just trust. And this leads to the principle of least privilege, which is enforced through one of 2 ways:

  • Code sandboxing/isolation (putting a faraday cage around untrusted code to isolate it from the host environment, which has a loose ambient environment)
  • Correct by construction, where the foundations of computing are laid down securely one step at a time, thus an object capability system; isolation is no longer something applied after the software is made, but is the default from which software is constructed and expected to run.

Most security attempts are done through the first technique: some form of isolation, whether by virtual machines, containerisation, network isolation, environment variable filtering with the env command or namespaces, or even the above technique of using environment scopes. The second technique is not practical due to the legacy of our software architecture in the industry.

The fundamental problem with technique one is that everything starts open and we are selectively trying to close things; this is privacy as an afterthought. This is doomed to failure, because it is fundamentally not scalable. The fundamental problem with technique two is that it makes interoperability something that requires forethought; this is privacy by default.

Anyway about choco, we can remove the official chocolatey source https://community.chocolatey.org/packages like choco source remove --name="'chocolatey'", but then replace it with our own source, probably a directory (cached via gitlab), but we have to bootstrap this directory at the beginning by internalizing the source. We should also enforce hashes on the downloads with:

     --checksum, --downloadchecksum, --download-checksum=VALUE
     Download Checksum - a user provided checksum for downloaded resources 
       for the package. Overrides the package checksum (if it has one).  
       Defaults to empty. Available in 0.10.0+.

     --checksum64, --checksumx64, --downloadchecksumx64, --download-checksum-x64=VALUE
     Download Checksum 64bit - a user provided checksum for 64bit downloaded 
       resources for the package. Overrides the package 64-bit checksum (if it 
       has one). Defaults to same as Download Checksum. Available in 0.10.0+.

     --checksumtype, --checksum-type, --downloadchecksumtype, --download-checksum-type=VALUE
     Download Checksum Type - a user provided checksum type. Overrides the 
       package checksum type (if it has one). Used in conjunction with Download 
       Checksum. Available values are 'md5', 'sha1', 'sha256' or 'sha512'. 
       Defaults to 'md5'. Available in 0.10.0+.

     --checksumtype64, --checksumtypex64, --checksum-type-x64, --downloadchecksumtypex64, --download-checksum-type-x64=VALUE
     Download Checksum Type 64bit - a user provided checksum for 64bit 
       downloaded resources for the package. Overrides the package 64-bit 
       checksum (if it has one). Used in conjunction with Download Checksum 
       64bit. Available values are 'md5', 'sha1', 'sha256' or 'sha512'. 
       Defaults to same as Download Checksum Type. Available in 0.10.0+.

This is different from nixpkgs since nixpkgs hashes are specified by the package set already, all done via our nixpkgs-overlay.

Homebrew would need something similar.

Details on chocolatey usage will be documented further on our development wiki.
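As a rough sketch of what that could look like (the internal source path and the package shown are illustrative only, not the final setup):

# Remove the community source and point at an internalized directory source
choco source remove --name="'chocolatey'"
choco source add --name="'internal'" --source="'C:\chocolatey-internal'"

# Enforce a user provided checksum on the downloaded resource
choco install nodejs --checksum='<sha256 of the download>' --checksum-type='sha256'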

…r nix-build

Note that even though `/.*` was ignored, the `.env.example` is still in
the filtered source. This is because `nix-gitignore` appears to prepend
the additional ignores, and thus `!.env.example` in `.gitignore`
overrides the `/.*`.
@CMCDragonkai

Turns out skopeo uses /var/tmp by default as its temporary directory. This does not exist in the gitlab-runner container image; we only have /tmp by default.

This can be overridden by --tmpdir, but I want to use the TMPDIR env variable, which is more universal.

Now the issue is that different platforms have different temporary locations. On Unix we use TMPDIR, but on Windows they have $env:TEMP or $env:TMP.

We have a project specific temporary directory at ./tmp, which is created by default in nix-shell. Although some commands benefit from having that temporary directory already created.

So we can do something like:

variables:
  TMPDIR: "${CI_PROJECT_DIR}/tmp"
  TEMP: "${CI_PROJECT_DIR/tmp"
  TMP: "${CI_PROJECT_DIR}/tmp"

default:
  before_script:
    - mkdir -p "$TMPDIR"

Now on Windows jobs, they must override the before_script to:

    before_script:
      - mkdir -Force "$env:TMPDIR"

I believe this should work, but I'm not sure if we should be using $TMPDIR or $env:TMPDIR.

@CMCDragonkai

So TMPDIR doesn't get recognised by skopeo. Fixed this in our deploy-image.sh script so that we have --tmpdir $TMPDIR. If the variable is empty, skopeo ends up defaulting to /var/tmp.

I'm also adding /var/tmp as a default directory to the gitlab-runner to avoid issues like this in the future.
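The relevant part of scripts/deploy-image.sh now looks roughly like this (the image path variable here is illustrative):

# Pass --tmpdir explicitly since skopeo does not read the TMPDIR env variable;
# per the note above, if $TMPDIR is empty skopeo falls back to /var/tmp
skopeo \
  --insecure-policy \
  --tmpdir "$TMPDIR" \
  copy \
  "docker-archive:$image_path" \
  "docker://$CONTAINER_REPOSITORY:$container_tag"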

@CMCDragonkai

Ok the TMPDIR problem is solved. But I realised that I cannot actually do this:

    TEMP: "$TMPDIR"
    TMP: "$TMPDIR"

On the windows runners it appears to create a problem:

Updating/initializing submodules recursively with git depth set to 50...
bash.exe: warning: could not find /tmp, please create!
bash.exe: warning: could not find /tmp, please create!
bash.exe: warning: could not find /tmp, please create!
bash.exe: warning: could not find /tmp, please create!
bash.exe: warning: could not find /tmp, please create!
git-lfs/2.8.0 (GitHub; windows amd64; go 1.12.2; git 30af66bb)
bash.exe: warning: could not find /tmp, please create!

Not sure what these are for. So redefining the Windows temporary variables is a no-go. That's ok for now since we aren't reliant on TMPDIR on Windows just yet.

@CMCDragonkai commented Jul 2, 2022

As for REGISTRY_AUTH_FILE, the token acquired by aws is only valid for 12 hours. https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login-password.html

This means we have to acquire this directly in the job instead of setting it as a long term credential.

Basically we have to exchange the long-term AWS_* credentials for short-term ECR credentials...
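In the job itself the exchange might look something like this (just a sketch, the exact job configuration may differ):

integration:deployment:
  before_script:
    - >
      aws ecr get-login-password --region "$AWS_DEFAULT_REGION"
      | skopeo login --username AWS --password-stdin
      --authfile "$REGISTRY_AUTH_FILE"
      "$CONTAINER_REGISTRY"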

@CMCDragonkai

Image deployment worked:

Login Succeeded!
> @matrixai/polykey@1.0.0 deploy-image
> ./scripts/deploy-image.sh "./builds/1jca3jj9wsad0i36v7235q52726885jg-docker-image-polykey-1.0.0.tar.gz"
Getting image source signatures
Copying blob sha256:29030a29f96a9f8d23d38db478a6a5a59adf68f4a84861aac1e4c209eb335567
Copying config sha256:947814ff9317c3e391359900a3746c39929300a7892c52e632b6f391c3af1be9
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob sha256:995e628c5c5f81059b652adeaa3f425ac3bb135f395ee7859927043fe2816783
Copying config sha256:947814ff9317c3e391359900a3746c39929300a7892c52e632b6f391c3af1be9
Writing manifest to image destination
Storing signatures

@CMCDragonkai

Creating an ECS cluster can be done via the CLI like this:

aws ecs create-cluster \
  --cluster-name 'polykey-testnet' \
  --capacity-providers 'FARGATE' \
  --default-capacity-provider-strategy 'capacityProvider=FARGATE' \
  --output json

If the cluster has already been created with the same parameters, it just returns the information that already exists.

The only issue is if the cluster already exists but was created with different parameters; then this command actually returns an error:

An error occurred (InvalidParameterException) when calling the CreateCluster operation: Arguments on this idempotent request are inconsistent with arguments used in previous request(s).

The only useful thing to do is to then do aws ecs update-cluster.

What we need to decide is to what extent we expect infrastructure to be created from the js-polykey repository. Do we want to orchestrate the entire AWS setup here, or do we already have expectations of certain things being set up?

I think we can do something simple right now, and rely on these idempotent commands that basically specify desired state, except for the fact that partial changes are not possible without performing updates.
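A simple way to make the script tolerant of an existing cluster is to check for it first; a minimal sketch (the jq filter here is illustrative):

cluster_status="$(
  aws ecs describe-clusters \
    --clusters 'polykey-testnet' \
    --output json \
  | jq -r '.clusters[0].status // "MISSING"'
)"
if [ "$cluster_status" != 'ACTIVE' ]; then
  aws ecs create-cluster \
    --cluster-name 'polykey-testnet' \
    --capacity-providers 'FARGATE' \
    --default-capacity-provider-strategy 'capacityProvider=FARGATE' \
    --output json
fi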

@CMCDragonkai

Something I didn't realise before is that AWS's awsvpc networking mode for Fargate containers has some automatic DNS server being used. I can't find any docs on what DNS servers AWS uses, but it's not possible to inject our own DNS servers into it:

An error occurred (ClientException) when calling the RegisterTaskDefinition operation: DNS servers are not supported on container when networkMode=awsvpc.

So we just have to use what they provide.

@CMCDragonkai

The deploy-service.sh now creates the cluster and also registers a task definition.

While creating the cluster was idempotent, the registration of the task definition is not; it just adds a new task definition revision every time. It's sort of a waste to have loads of old task definitions around, especially if nothing actually changed.

For now, as we generate new task definitions, once we perform the service update, old task definitions can be automatically garbage collected (we may keep at least the last 10 just in case things change).
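Something along these lines could do the pruning (the family name and the cut-off of 10 revisions are assumptions):

# List ACTIVE revisions newest-first and deregister everything past the first 10
aws ecs list-task-definitions \
  --family-prefix 'polykey-testnet' \
  --status 'ACTIVE' \
  --sort 'DESC' \
  --output json \
| jq -r '.taskDefinitionArns[10:][]' \
| while read -r arn; do
    aws ecs deregister-task-definition --task-definition "$arn" > /dev/null
  done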

It also turns out we don't need the full ARN for the ecsTaskExecutionRole; the name is sufficient. But in the future if we need it:

aws --profile=matrix iam get-role --role-name 'ecsTaskExecutionRole' | jq -r '.Role.Arn'

@CMCDragonkai

Ok so I'm going to push the image here using the $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD, just like our gitlab system.

[Resolved review comments on .env.example and scripts/deploy-image.sh]
@CMCDragonkai

Successful deployment! https://gitlab.com/MatrixAI/open-source/js-polykey/-/jobs/2700813855

Do note the usage of $' as a way to start a single quoted string that supports C style escapes. See: https://www.baeldung.com/linux/single-quote-within-single-quoted-string This tripped me up a bit! @tegefaulkes @emmacasolin

@CMCDragonkai

The deployment is all working.

Now I'm updating to 22.05, and that also worked.

Additionally, deploy-service.sh is producing a bit too much detail; by applying the --query option, we can filter down the resulting JSON to only what we need to know.
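For example, a JMESPath expression along these lines trims the update-service output down to roughly the fields shown in the deployment log further below (the exact expression in deploy-service.sh may differ):

aws ecs update-service \
  --cluster 'polykey-testnet' \
  --service 'polykey-testnet' \
  --force-new-deployment \
  --query 'service.{serviceName: serviceName, serviceArn: serviceArn, status: status, deployments: deployments[].{id: id, status: status, rolloutState: rolloutState, rolloutStateReason: rolloutStateReason}}' \
  --output json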

So now upon integration:deployment, the testnet service is redeployed, and this will be available for integration testing jobs.

At the same time, the release:distribution job will produce a release to the gitlab container registry as well. We may also release one to the github container registry since we are pushing releases as part of the github tag/release page.

@CMCDragonkai

Once merged, it will still depend on tests passing, and only occur in the staging branch, so that's still a TODO.

@CMCDragonkai

Just tested an agent start... there are some errors to be fixed with testnet too.

@CMCDragonkai

Last few things to do:

  1. Scope AWS key down to only read/write for ECR, and updating ECS service
  2. Take down scaffolding for the .gitlab-ci.yml file, and test the gitlab container image deployment
  3. Split up some commits

@CMCDragonkai

This blog post https://www.opensourcerers.org/2020/11/16/container-images-multi-architecture-manifests-ids-digests-whats-behind/ provides some interesting information on multi-architecture container images. I noticed that AWS now offers ARM architecture instances, and they are in fact cheaper than x86. Probably due to ARM CPU efficiencies.

@CMCDragonkai commented Jul 11, 2022

ECS deployment now looks like this:

Deploying ECS service
{
    "serviceName": "polykey-testnet",
    "serviceArn": "arn:aws:ecs:[MASKED]:015248367786:service/polykey-testnet/polykey-testnet",
    "status": "ACTIVE",
    "deployments": [
        {
            "id": "ecs-svc/7051342343694491867",
            "status": "PRIMARY",
            "rolloutState": "IN_PROGRESS",
            "rolloutStateReason": "ECS deployment ecs-svc/7051342343694491867 in progress."
        },
        {
            "id": "ecs-svc/1681361063192108147",
            "status": "ACTIVE",
            "rolloutState": "COMPLETED",
            "rolloutStateReason": "ECS deployment ecs-svc/1681361063192108147 completed."
        }
    ]
}

@CMCDragonkai

Thinking about the deployment from testnet to mainnet.

The way we upload images right now is to always associate them as the latest image.

However we should not tag them as the latest image until we are releasing to mainnet.

Otherwise it's possible that mainnet will pick up the latest image that is only for testnet.

There's a few ways to solve this...

We could create 2 ECR repositories, one for polykey-testnet and one for polykey-mainnet. This seems wasteful, since they are going to be using the same images anyway.

Use the same ECR repository, but don't use the latest tag; instead update the task definition to use a specific tag that was just built. This however is not a good idea either because of aws/aws-sdk#406. We end up having to specify the WHOLE task definition... but at the same time, it's good to have a place to fully specify it out. But I think it's mixing up the responsibilities: specifying the task definition is the responsibility of Polykey-Infrastructure.

Now lastly we can make use of the tags themselves. Rather than using just the latest tag, we create another set of tags called testnet and mainnet. These represent the tagged images that we want testnet and mainnet to be using.

The latest tag can still be used to indicate the "latest" release, which would not necessarily be what end-users should be using, since it is the latest tag, but not the stable tag.
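In skopeo terms the promotion would just be a retag on the registry, something like this (the testnet/mainnet tags are the proposed convention, not yet implemented):

# Point the testnet tag at the image that was just built
skopeo copy "docker://$CONTAINER_REPOSITORY:$container_tag" "docker://$CONTAINER_REPOSITORY:testnet"

# Later, when releasing, promote the same image to mainnet
skopeo copy "docker://$CONTAINER_REPOSITORY:$container_tag" "docker://$CONTAINER_REPOSITORY:mainnet"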

@CMCDragonkai commented Jul 11, 2022

Some logs regarding the strange behaviour connecting to testnet. Will be useful to you @emmacasolin

npm run polykey -- agent start --seed-nodes="v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg@54.252.176.51:1314" --verbose --format json

Local logs:

INFO:PolykeyAgent:Creating PolykeyAgent
INFO:PolykeyAgent:Setting umask to 077
INFO:PolykeyAgent:Setting node path to /home/cmcdragonkai/.local/share/polykey
INFO:Status:Starting Status
INFO:Status:Writing Status to /home/cmcdragonkai/.local/share/polykey/status.json
INFO:Status:Status is STARTING
INFO:Schema:Creating Schema
INFO:Schema:Starting Schema
INFO:Schema:Setting state path to /home/cmcdragonkai/.local/share/polykey/state
INFO:Schema:Started Schema
INFO:Schema:Created Schema
INFO:KeyManager:Creating KeyManager
INFO:KeyManager:Setting keys path to /home/cmcdragonkai/.local/share/polykey/state/keys
INFO:KeyManager:Starting KeyManager
INFO:KeyManager:Checking /home/cmcdragonkai/.local/share/polykey/state/keys/root.pub and /home/cmcdragonkai/.local/share/polykey/state/keys/root.key
INFO:KeyManager:Reading /home/cmcdragonkai/.local/share/polykey/state/keys/root.pub and /home/cmcdragonkai/.local/share/polykey/state/keys/root.key
INFO:KeyManager:Checking /home/cmcdragonkai/.local/share/polykey/state/keys/root.crt
INFO:KeyManager:Reading /home/cmcdragonkai/.local/share/polykey/state/keys/root.crt
INFO:KeyManager:Checking /home/cmcdragonkai/.local/share/polykey/state/keys/db.key
INFO:KeyManager:Reading /home/cmcdragonkai/.local/share/polykey/state/keys/db.key
INFO:KeyManager:Started KeyManager
INFO:KeyManager:Created KeyManager
INFO:DB:Creating DB
INFO:DB:Starting DB
INFO:DB:Setting DB path to /home/cmcdragonkai/.local/share/polykey/state/db
INFO:DB:Started DB
INFO:DB:Created DB
INFO:IdentitiesManager:Creating IdentitiesManager
INFO:IdentitiesManager:Starting IdentitiesManager
INFO:IdentitiesManager:Started IdentitiesManager
INFO:IdentitiesManager:Created IdentitiesManager
INFO:Sigchain:Creating Sigchain
INFO:Sigchain:Starting Sigchain
INFO:Sigchain:Started Sigchain
INFO:Sigchain:Created Sigchain
INFO:ACL:Creating ACL
INFO:ACL:Starting ACL
INFO:ACL:Started ACL
INFO:ACL:Created ACL
INFO:GestaltGraph:Creating GestaltGraph
INFO:GestaltGraph:Starting GestaltGraph
INFO:GestaltGraph:Started GestaltGraph
INFO:GestaltGraph:Created GestaltGraph
INFO:Proxy:Creating Proxy
INFO:Proxy:Created Proxy
INFO:NodeGraph:Creating NodeGraph
INFO:NodeGraph:Starting NodeGraph
INFO:NodeGraph:Started NodeGraph
INFO:NodeGraph:Created NodeGraph
INFO:NodeManager:Starting NodeManager
INFO:NodeManager:Started NodeManager
INFO:Discovery:Creating Discovery
INFO:Discovery:Starting Discovery
INFO:Discovery:Started Discovery
INFO:Discovery:Created Discovery
INFO:NotificationsManager:Creating NotificationsManager
INFO:NotificationsManager:Starting NotificationsManager
INFO:NotificationsManager:Started NotificationsManager
INFO:NotificationsManager:Created NotificationsManager
INFO:VaultManager:Creating VaultManager
INFO:VaultManager:Setting vaults path to /home/cmcdragonkai/.local/share/polykey/state/vaults
INFO:VaultManager:Starting VaultManager
INFO:DB:Creating DB
INFO:DB:Starting DB
INFO:DB:Setting DB path to /home/cmcdragonkai/.local/share/polykey/state/vaults/efs
INFO:DB:Started DB
INFO:DB:Created DB
INFO:INodeManager:Creating INodeManager
INFO:INodeManager:Starting INodeManager
INFO:INodeManager:Started INodeManager
INFO:INodeManager:Created INodeManager
INFO:EncryptedFileSystem:Starting EncryptedFS
INFO:EncryptedFileSystem:Started EncryptedFS
INFO:VaultManager:Started VaultManager
INFO:VaultManager:Created VaultManager
INFO:SessionManager:Creating SessionManager
INFO:SessionManager:Starting SessionManager
INFO:SessionManager:Started SessionManager
INFO:SessionManager:Created SessionManager
INFO:PolykeyAgent:Starting PolykeyAgent
INFO:GRPCServerClient:Starting GRPCServer on 127.0.0.1:0
INFO:GRPCServerClient:Started GRPCServer on 127.0.0.1:41137
INFO:GRPCServerAgent:Starting GRPCServer on 127.0.0.1:0
INFO:GRPCServerAgent:Started GRPCServer on 127.0.0.1:37675
INFO:Proxy:Starting Forward Proxy from 127.0.0.1:0 to 0.0.0.0:0 and Reverse Proxy from 0.0.0.0:0 to 127.0.0.1:37675
INFO:Proxy:Started Forward Proxy from 127.0.0.1:43647 to 0.0.0.0:42838 and Reverse Proxy from 0.0.0.0:42838 to 127.0.0.1:37675
INFO:Queue:Starting Queue
INFO:Queue:Started Queue
INFO:NodeConnectionManager:Starting NodeConnectionManager
INFO:NodeConnectionManager:Started NodeConnectionManager
INFO:NodeConnectionManager:Syncing nodeGraph
INFO:NodeConnectionManager:Getting connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:NodeConnectionManager:no existing entry, creating connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:NodeConnection 54.252.176.51:1314:Creating NodeConnection
INFO:clientFactory:Creating GRPCClientAgent connecting to 54.252.176.51:1314
INFO:Proxy:Handling CONNECT to 54.252.176.51:1314
INFO:ConnectionForward 54.252.176.51:1314:Starting Connection Forward
INFO:ConnectionForward 54.252.176.51:1314:Started Connection Forward
INFO:ConnectionForward 54.252.176.51:1314:Composing Connection Forward
INFO:ConnectionForward 54.252.176.51:1314:Composed Connection Forward
INFO:Proxy:Handled CONNECT to 54.252.176.51:1314
INFO:clientFactory:Created GRPCClientAgent connecting to 54.252.176.51:1314
INFO:NodeConnection 54.252.176.51:1314:Created NodeConnection
INFO:NodeConnectionManager:Getting connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:NodeConnectionManager:existing entry found for v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:NodeConnectionManager:withConnF calling function with connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:Status:Finish Status STARTING
INFO:Status:Writing Status to /home/cmcdragonkai/.local/share/polykey/status.json
INFO:Status:Status is LIVE
INFO:PolykeyAgent:Started PolykeyAgent
INFO:PolykeyAgent:Created PolykeyAgent
INFO:WorkerManager:Creating WorkerManager
INFO:WorkerManager:Created WorkerManager
{"pid":3964675,"nodeId":"vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg","clientHost":"127.0.0.1","clientPort":41137,"agentHost":"127.0.0.1","agentPort":37675,"proxyHost":"0.0.0.0","proxyPort":42838,"forwardHost":"127.0.0.1","forwardPort":43647}
INFO:NodeConnectionManager:Getting connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:NodeConnectionManager:existing entry found for v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:NodeConnectionManager:withConnF calling function with connection to v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg
INFO:Proxy:Handling connection from 54.252.176.51:1314
INFO:ConnectionReverse 54.252.176.51:1314:Starting Connection Reverse
INFO:ConnectionReverse 54.252.176.51:1314:Started Connection Reverse
INFO:ConnectionReverse 54.252.176.51:1314:Composing Connection Reverse
INFO:ConnectionReverse 54.252.176.51:1314:Composed Connection Reverse
INFO:PolykeyAgent:Reverse connection adding v41kmvhtpj84vfdplp45cjvg0te66pl9t3fr5k7p6vf58lqdku3fg:54.252.176.51:1314 to NodeGraph
INFO:Proxy:Handled connection from 54.252.176.51:1314
^CINFO:WorkerManager:Destroying WorkerManager
INFO:WorkerManager:Destroyed WorkerManager
INFO:PolykeyAgent:Stopping PolykeyAgent
INFO:Status:Begin Status STOPPING
INFO:Status:Writing Status to /home/cmcdragonkai/.local/share/polykey/status.json
INFO:Status:Status is STOPPING
INFO:SessionManager:Stopping SessionManager
INFO:SessionManager:Stopped SessionManager
INFO:NotificationsManager:Stopping NotificationsManager
INFO:NotificationsManager:Stopped NotificationsManager
INFO:VaultManager:Stopping VaultManager
INFO:EncryptedFileSystem:Stopping EncryptedFS
INFO:INodeManager:Stopping INodeManager
INFO:INodeManager:Stopped INodeManager
INFO:DB:Stopping DB
INFO:DB:Stopped DB
INFO:EncryptedFileSystem:Stopped EncryptedFS
INFO:VaultManager:Stopped VaultManager
INFO:Discovery:Stopping Discovery
INFO:Discovery:Stopped Discovery
INFO:NodeConnectionManager:Stopping NodeConnectionManager
INFO:NodeConnection 54.252.176.51:1314:Destroying NodeConnection
INFO:clientFactory:Destroying GRPCClientAgent connected to 54.252.176.51:1314
INFO:clientFactory:Destroyed GRPCClientAgent connected to 54.252.176.51:1314
INFO:NodeConnection 54.252.176.51:1314:Destroyed NodeConnection
INFO:NodeConnectionManager:Stopped NodeConnectionManager
INFO:NodeGraph:Stopping NodeGraph
INFO:NodeGraph:Stopped NodeGraph
INFO:NodeManager:Stopping NodeManager
INFO:NodeManager:Stopped NodeManager
INFO:Queue:Stopping Queue
INFO:Queue:Stopped Queue
INFO:Proxy:Stopping Proxy Server
INFO:ConnectionForward 54.252.176.51:1314:Stopping Connection Forward
INFO:ConnectionReverse 54.252.176.51:1314:Stopping Connection Reverse
INFO:ConnectionReverse 54.252.176.51:1314:Stopped Connection Reverse
WARN:ConnectionForward 54.252.176.51:1314:Client Error: ErrorConnectionEndTimeout
INFO:ConnectionForward 54.252.176.51:1314:Stopped Connection Forward
INFO:Proxy:Stopped Proxy Server
INFO:GRPCServerAgent:Stopping GRPCServer
INFO:GRPCServerAgent:Stopped GRPCServer
INFO:GRPCServerClient:Stopping GRPCServer
INFO:GRPCServerClient:Stopped GRPCServer
INFO:GestaltGraph:Stopping GestaltGraph
INFO:GestaltGraph:Stopped GestaltGraph
INFO:ACL:Stopping ACL
INFO:ACL:Stopped ACL
INFO:Sigchain:Stopping Sigchain
INFO:Sigchain:Stopped Sigchain
INFO:IdentitiesManager:Stopping IdentitiesManager
INFO:IdentitiesManager:Stopped IdentitiesManager
INFO:DB:Stopping DB
INFO:DB:Stopped DB
INFO:KeyManager:Stopping KeyManager
INFO:KeyManager:Stopped KeyManager
INFO:Schema:Stopping Schema
INFO:Schema:Stopped Schema
INFO:Status:Stopping Status
INFO:Status:Writing Status to /home/cmcdragonkai/.local/share/polykey/status.json
INFO:Status:Status is DEAD
INFO:PolykeyAgent:Stopped PolykeyAgent

Then on the testnet node:

INFO:Proxy:Handling connection from 120.18.194.227:3178
--
INFO:ConnectionReverse 120.18.194.227:3178:Starting Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3178:Started Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3178:Composing Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3178:Composed Connection Reverse
INFO:PolykeyAgent:Reverse connection adding vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg:120.18.194.227:3178 to NodeGraph
INFO:Proxy:Handled connection from 120.18.194.227:3178
INFO:NodeConnectionManager:Getting connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:no existing entry, creating connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:ConnectionForward 120.18.194.227:3178:Starting Connection Forward
INFO:ConnectionForward 120.18.194.227:3178:Started Connection Forward
INFO:NodeConnectionManager:Getting connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:no existing entry, creating connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnection 120.18.194.227:3178:Creating NodeConnection
INFO:clientFactory:Creating GRPCClientAgent connecting to 120.18.194.227:3178
INFO:Proxy:Handling CONNECT to 120.18.194.227:3178
INFO:ConnectionForward 120.18.194.227:3178:Composing Connection Forward
INFO:ConnectionForward 120.18.194.227:3178:Composed Connection Forward
INFO:Proxy:Handled CONNECT to 120.18.194.227:3178
INFO:clientFactory:Created GRPCClientAgent connecting to 120.18.194.227:3178
INFO:NodeConnection 120.18.194.227:3178:Created NodeConnection
INFO:NodeConnectionManager:withConnF calling function with connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorNodeGraphNodeIdNotFound
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorProxyConnectInvalidUrl
INFO:NodeConnectionManager:Getting connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:no existing entry, creating connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:Getting connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:existing entry found for vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:withConnF calling function with connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorNodeGraphNodeIdNotFound
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorProxyConnectInvalidUrl
INFO:NodeConnectionManager:Getting connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:no existing entry, creating connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:Getting connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:existing entry found for vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:withConnF calling function with connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorNodeGraphNodeIdNotFound
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorProxyConnectInvalidUrl
INFO:NodeConnectionManager:Getting connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:no existing entry, creating connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:Getting connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:existing entry found for vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:withConnF calling function with connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorNodeGraphNodeIdNotFound
INFO:ConnectionReverse 120.18.194.227:3178:Stopping Connection Reverse
INFO:ConnectionForward 120.18.194.227:3178:Stopping Connection Forward
INFO:clientFactory:Destroying GRPCClientAgent connected to 120.18.194.227:3178
INFO:NodeConnection 120.18.194.227:3178:Destroying NodeConnection
INFO:NodeConnection 120.18.194.227:3178:Destroyed NodeConnection
INFO:clientFactory:Destroyed GRPCClientAgent connected to 120.18.194.227:3178
INFO:ConnectionReverse 120.18.194.227:3178:Stopped Connection Reverse
INFO:ConnectionForward 120.18.194.227:3178:Stopped Connection Forward
INFO:Proxy:Handling connection from 120.18.194.227:3158
INFO:ConnectionReverse 120.18.194.227:3158:Starting Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3158:Started Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3158:Composing Connection Reverse
INFO:ConnectionReverse 120.18.194.227:3158:Composed Connection Reverse
INFO:PolykeyAgent:Reverse connection adding vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg:120.18.194.227:3158 to NodeGraph
INFO:Proxy:Handled connection from 120.18.194.227:3158
INFO:NodeConnectionManager:Getting connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:NodeConnectionManager:no existing entry, creating connection to vunf6nb9p4ag1rravoqfh5cfcbn083sf89coa71knsgtks4o5ka40
INFO:ConnectionForward 120.18.194.227:3158:Starting Connection Forward
INFO:ConnectionForward 120.18.194.227:3158:Started Connection Forward
INFO:NodeConnectionManager:Getting connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnectionManager:no existing entry, creating connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:NodeConnection 120.18.194.227:3158:Creating NodeConnection
INFO:clientFactory:Creating GRPCClientAgent connecting to 120.18.194.227:3158
INFO:Proxy:Handling CONNECT to 120.18.194.227:3158
INFO:ConnectionForward 120.18.194.227:3158:Composing Connection Forward
INFO:ConnectionForward 120.18.194.227:3158:Composed Connection Forward
INFO:Proxy:Handled CONNECT to 120.18.194.227:3158
INFO:clientFactory:Created GRPCClientAgent connecting to 120.18.194.227:3158
INFO:NodeConnection 120.18.194.227:3158:Created NodeConnection
INFO:NodeConnectionManager:withConnF calling function with connection to vvealmnbubpvvqtf9iutjuq78g0ehoia8dedevsskfmut0376pofg
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
INFO:ConnectionForward 52.62.120.25:1314:Starting Connection Forward
ERROR:GRPCClientAgentService:nodesHolePunchMessageSend:ErrorNodeGraphNodeIdNotFound
INFO:ConnectionReverse 120.18.194.227:3158:Stopping Connection Reverse
INFO:ConnectionForward 120.18.194.227:3158:Stopping Connection Forward
INFO:clientFactory:Destroying GRPCClientAgent connected to 120.18.194.227:3158
INFO:NodeConnection 120.18.194.227:3158:Destroying NodeConnection
INFO:NodeConnection 120.18.194.227:3158:Destroyed NodeConnection
INFO:clientFactory:Destroyed GRPCClientAgent connected to 120.18.194.227:3158
INFO:ConnectionReverse 120.18.194.227:3158:Stopped Connection Reverse
INFO:ConnectionForward 120.18.194.227:3158:Stopped Connection Forward

… stages of the pipeline

* `integration:deployment` - deploys to testnet
* `integration:prerelease` - deploys to GitLab container registry as `testnet`
* `release:deployment:branch` - deploys to mainnet
* `release:deployment:tag` - deploys to mainnet
* `release:distribution` - deploys to GitLab container registry as `mainnet`

mainnet deployment is still a stub
@CMCDragonkai CMCDragonkai marked this pull request as ready for review July 11, 2022 09:40
@CMCDragonkai CMCDragonkai changed the title WIP: Testnet Deployment via CI/CD Testnet Deployment via CI/CD Jul 11, 2022
@CMCDragonkai CMCDragonkai merged commit a5010c0 into staging Jul 11, 2022