Commit
Deploy aws readme (#45)
kleineshertz authored Sep 23, 2023
1 parent 1f1f6fd commit 63b8c48
Showing 3 changed files with 12 additions and 12 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -60,7 +60,7 @@ For more details about getting started, see [Getting started](doc/started.md). F
### [Script configuration](doc/scriptconfig.md)
### [Capillaries UI](ui/README.md)
### [Capillaries API](doc/api.md)
- ### [Capillaries deploy tool: Openstack cloud deployment](test/deploy/README.md)
+ ### [Capillaries deploy tool: Openstack/AWS cloud deployment](test/deploy/README.md)
### [Glossary](doc/glossary.md)
### [Q & A](doc/qna.md)
### [Capillaries blog](https://capillaries.io/blog/index.html)
2 changes: 1 addition & 1 deletion doc/glossary.md
@@ -160,7 +160,7 @@ go run capitoolbelt.go get_run_status_diagram -script_file=../../../test/data/cf
```

## Deploy tool
- This is not part of Capillaries framework. It's a command line tool that can be used to deploy a complete Capillaries-based solution in the public or private cloud that implements Openstack API. See full documentation [here](../test/deploy/README.md).
+ This is not part of the Capillaries framework. It's a command-line tool that can be used to deploy a complete Capillaries-based solution in a public or private cloud that implements the Openstack API, or in the AWS cloud. See the full documentation [here](../test/deploy/README.md).

## Daemon
An executable that implements one or more [processors](#processor). Capillaries source code comes with a stock daemon that implements all supported [processor types](#processor-types), including [py_calc processor](#py_calc-processor) implemented as a [custom processor](#table_custom_tfm_table).
20 changes: 10 additions & 10 deletions test/deploy/README.md
@@ -1,8 +1,8 @@
# Working with Capillaries deploy tool

- Capillaries [Deploy tool](../../doc/glossary.md#deploy-tool) can provision a complete Capillaries cloud environment in public/private clouds that support the [Openstack API](https://www.openstack.org).
+ Capillaries [Deploy tool](../../doc/glossary.md#deploy-tool) can provision a complete Capillaries cloud environment in public/private clouds that support the [Openstack API](https://www.openstack.org), or in AWS.

- The `test/deploy` directory contains two sample projects (capideploy_project_dreamhost.json and capideploy_project_genesis.json) used by the [Deploy tool](../../doc/glossary.md#deploy-tool). Sensitive and repetitive configuration can be stored in project parameter files (capideploy_project_params_dreamhost.json and capideploy_project_params_genesis.json), and it's a good idea to store parameter files in a somewhat secure location (like the user home dir).
+ The `test/deploy` directory contains a sample project template (sampledeployment.jsonnet) and a sample project (sampledeployment.json) used by the [Deploy tool](../../doc/glossary.md#deploy-tool).

For troubleshooting, add the `-verbose` argument to your deploy tool command line.

@@ -22,22 +22,22 @@ Capillaries configuration scripts and in/out data are stored on separate volumes
## Deployment project template (`*.jsonnet`) and deployment project (`*.json`) files

The Capideploy tool uses a deployment project file (see sample `sampledeployment.json`) to:
- - configure creation of Openstack objects like instances and volumes and track the status of those objects locally
- - push Capillaries data and binaries to the created Openstack deployment
- - clean up the Openstack deployment
+ - configure creation of Openstack/AWS objects like instances and volumes and track the status of those objects locally
+ - push Capillaries data and binaries to the created Openstack/AWS deployment
+ - clean up the Openstack/AWS deployment

Deployment project files contain a description and the status of each instance. When there are many instances performing the same task (like Cassandra nodes or instances running Capillaries [Daemon](../../doc/glossary.md#daemon)), these descriptions become quite repetitive. To avoid creating repetitive configurations manually, use [jsonnet](https://jsonnet.org) templates like `sampledeployment.jsonnet`. Before deploying, make sure that you have generated a deployment project `*.json` file from the `*.jsonnet` template, and, under normal circumstances, avoid manual changes to your `*.json` file: tweak the `*.jsonnet` file and regenerate the `*.json` instead, using the jsonnet interpreter of your choice. Feel free to tweak the `*.json` file manually only if you really know what you are doing.
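To make the template idea concrete, here is a minimal jsonnet sketch in the spirit of `sampledeployment.jsonnet` (the field names are illustrative only, not the actual capideploy schema):

```jsonnet
// Hypothetical fragment: generate four identical Cassandra instance
// entries instead of writing each one out by hand.
local cass_count = 4;

{
  instances: {
    ['cass%03d' % i]: {
      flavor: 'x1.medium',
      attached_volumes: ['cassandra_data_%03d' % i],
    }
    for i in std.range(1, cass_count)
  },
}
```

Render it with any jsonnet interpreter, e.g. `jsonnet sampledeployment.jsonnet > sampledeployment.json`; keep your tweaks in the template, not in the generated file.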

## Before deployment

1. Install [jq](https://jqlang.github.io/jq/). Adding jq to the list of requirements was not an easy decision, but without it, the [start_cluster.sh](./start_cluster.sh) script, which has to read configuration from the deployment project file, would be unnecessarily error-prone.
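As an illustration of the kind of lookup jq enables here, the sketch below uses hypothetical field names, not the actual `sampledeployment.json` schema:

```shell
# Create a tiny stand-in for a deployment project file
# (hypothetical fields -- the real schema is defined by capideploy).
cat > /tmp/deploy_demo.json <<'EOF'
{
  "instances": {
    "cass001": { "host_name": "cass001" },
    "cass002": { "host_name": "cass002" }
  }
}
EOF

# The kind of read start_cluster.sh has to perform: list instance names.
jq -r '.instances | keys[]' /tmp/deploy_demo.json
```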

- 2. Make sure you have created the key pair for SSH access to the Openstack instances; the key pair name is stored in `root_key_name` in the project file. Throughout this document, we assume the key pair is stored in `~/.ssh/` and the private key file has a name like:
+ 2. Make sure you have created the key pair for SSH access to the Openstack/AWS instances; the key pair name is stored in `root_key_name` in the project file. Throughout this document, we assume the key pair is stored in `~/.ssh/` and the private key file has a name like:
`sampledeployment002_rsa`.
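If you still need to create such a key pair, a typical sequence looks like this (the file name matches the example above; adjust it to your `root_key_name`, and note the SSH user name depends on the image):

```shell
mkdir -p ~/.ssh

# Generate a passphrase-less RSA key pair (example name; match root_key_name).
rm -f ~/.ssh/sampledeployment002_rsa ~/.ssh/sampledeployment002_rsa.pub
ssh-keygen -q -t rsa -b 4096 -N '' -f ~/.ssh/sampledeployment002_rsa

# The .pub half is registered with the cloud provider; the private half
# is what you point ssh at, e.g. (user name is an assumption):
# ssh -i ~/.ssh/sampledeployment002_rsa ubuntu@<instance_address>
```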

3. If you want to use SFTP (instead of or along with NFS) for file sharing, make sure all SFTP key files referenced in the deployment project `sampledeployment.json` are present.

- 4. Make sure all environment variables storing Capideploy and Openstack settings are set. For non-production environments, you may want to keep them in a separate private file and activate it before deploying: `source ~/sampledeployment.rc`:
+ 4. Make sure all environment variables storing Capideploy and Openstack/AWS settings are set. For non-production environments, you may want to keep them in a separate private file and activate it before deploying: `source ~/sampledeployment.rc`:

```
# capideploy settings
@@ -358,7 +358,7 @@ $capideploy delete_floating_ip -prj=sampledeployment.json;

## Q&A

- ### Openstack environment variables
+ ### Openstack/AWS environment variables

Q. The list of `OS_*` variables changes from one Openstack provider to another. Why?

@@ -396,9 +396,9 @@ A. This example works well when you need to quickly provision an environment wit

### Non-Openstack clouds

- Q. Does the Deploy tool work with clouds that do not support Openstack? AWS, Azure, GCP?
+ Q. Does the Deploy tool work with clouds that do not support Openstack or AWS? Azure, GCP?

- A. At the moment, no.
+ A. Starting Sep 2023, the deploy tool supports deployment to AWS. There is no support for Azure or GCP.

### Why should I use another custom deploy tool?

