Push preliminary AWS deployment documentation #467
Conversation
This is the first iteration in the creation of a how-to guide describing how you can manually set up an AWS kops-based cluster and use our deployer to manually deploy a new hub on top of it. This is still a very preliminary draft with poor styling, but the instructions should be detailed enough that others can reproduce the process without major issues. I will submit this effort as a draft so others can actively contribute not only to the styling but also to the semantics (you will quickly notice this is not my mother language, so I am open to criticism so I can improve ;-)
@choldgraf, your wizard-ish documentation wands are required here 😛. Btw, once it is finished, this one should close #453 and #366 (in cascade).
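For context, the kind of cluster-creation command the guide walks through looks roughly like this (a minimal sketch; the bucket, cluster name, zone, and node count are placeholders, not the guide's actual values):

```bash
# Minimal sketch only; all values are placeholders, not the guide's actual ones.
export KOPS_STATE_STORE=s3://<state-store-bucket>

kops create cluster \
  --name <cluster_name> \
  --zones <aws-availability-zone> \
  --node-count 2 \
  --yes
```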
This looks great - thanks for documenting all of this @damianavila - I added in a few suggestions etc, just to make the links a bit easier to discover and follow. Perhaps once this is merged @sgibson91 would be interested in giving the instructions a whirl and reporting back where they are unclear :-)
This is a "templated" command, then it should not contain specific names nor regions. Co-authored-by: Yuvi Panda <yuvipanda@gmail.com>
@choldgraf thanks for all those comments and fixes.
Sounds good - this looks ready to merge IMO once @yuvipanda's comment is addressed; we can iterate on and improve the documentation in chunks as more team members follow these steps.
Great, I will address the remaining comments, take the PR out of the draft state for a few hours, and merge by tomorrow if I do not hear anything else from any of you.
Previously, I had written a note about doing those things but did not provide any commands. This commit adds those commands in a new 4th step, so the user does not need to guess the process.
Previously, I was suggesting validating (and failing, with a 10 minute wait) before applying the CoreDNS patches. Since we already know the validation will fail, let's apply the patches **before** requesting the validation process.
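A sketch of the reordering (the patch invocation is illustrative and `coredns-patch.yaml` is a placeholder; the real patches are in the guide):

```bash
# Apply the CoreDNS patches first (placeholder file name)...
kubectl -n kube-system patch deployment coredns --patch "$(cat coredns-patch.yaml)"

# ...and only then validate, instead of waiting 10 minutes for a known failure:
kops validate cluster --wait 10m
```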
Generally looks good to me, in terms of ease of reading and lots of cross-links to other resources 👍🏻 I'd be keen to give it a try on AWS at some point and provide further technical feedback 🤖
Minor changes but otherwise LGTM!
```bash
export AWS_PROFILE=<cluster_name>
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
```
I think an easier / safer way is to put these in `~/.aws/credentials` and just export `AWS_PROFILE`. Does `aws configure` populate `~/.aws/credentials`?
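For reference, that flow would look roughly like this (profile name and key values are placeholders):

```bash
# ~/.aws/credentials would contain something like:
#
#   [<cluster_name>]
#   aws_access_key_id = AKIA...
#   aws_secret_access_key = ...

# ...and then only the profile needs to be exported:
export AWS_PROFILE=<cluster_name>
```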
> Does `aws configure` populate `~/.aws/credentials`?

Yep, it does. In fact, those values are already in the credentials file (this is why you can get them with `aws configure get`). But AFAIK, when you export the profile name, the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env variables are not exported as well, and kops seems to need those according to the docs (check the last box in that section): https://kops.sigs.k8s.io/getting_started/aws/#setup-iam-user.
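The snippet from that section of the kops docs exports both values from the configured profile:

```bash
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
```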
Since this is minor, I will not block the merge on it.
We can always iterate and change it later down the road.
Btw, feel free to continue the discussion.
Because it is a command you can execute to configure the AWS CLI. Co-authored-by: Yuvi Panda <yuvipanda@gmail.com>
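That is, it reads as something you can run directly, e.g. (profile name is a placeholder):

```bash
# Configure credentials for a named profile interactively:
aws configure --profile <cluster_name>
```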
You could have several hubs in the same cluster, and we are deploying a kops-based cluster here. Co-authored-by: Yuvi Panda <yuvipanda@gmail.com>
The kops guide STRONGLY recommends versioning your S3 bucket in case you ever need to revert or recover a previous state store. Co-authored-by: Yuvi Panda <yuvipanda@gmail.com>
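For reference, versioning can be enabled on an existing bucket with the AWS CLI (bucket name is a placeholder):

```bash
aws s3api put-bucket-versioning \
  --bucket <state-store-bucket> \
  --versioning-configuration Status=Enabled
```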
So we do not need to change the name after the creation. Co-authored-by: Yuvi Panda <yuvipanda@gmail.com>
Since we are now creating the ssh key with the proper name, we do not need to rename it. Additionally, we can quickly regenerate the public key (added a note about that), so let's avoid checking it into the repo.
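Regenerating the public key from the private one is a one-liner (the path is a placeholder):

```bash
# Derive the public key from the private key file:
ssh-keygen -y -f <private-key-path> > <private-key-path>.pub
```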
If you are deploying a `prod` hub alongside the `staging` hub, you will need to repeat the CNAME configuration for `prod` as outlined in step 7.
This is the first iteration in the creation of a how-to guide describing how you can manually set up an AWS kops-based cluster and use our deployer to manually deploy a new hub on top of it.
More details in the commit message (and in the document itself) 😉
Draft PR for now!