Update GKE development tooling to 1.12 #887
Conversation
/cc @heartrobotninja do the GKE parameters look good to you here? I ran most of the e2e tests on my local development cluster, and things look good!
Build Succeeded 👏 Build Id: 70b869ac-285a-4b3b-8280-d904434e9f0a
The following development artifacts have been built, and will exist for the next 30 days:
A preview of the website (the last 30 builds are retained). To install this version:
LGTM
```diff
 nodePools:
 - name: "default"
-  initialNodeCount: 4
+  initialNodeCount: 8
```
8 × 4 = 32, which I believe is larger than the default quota in a zone for new projects. Do we need to bump this for individuals to run e2e tests? I know the shared e2e cluster has 8 nodes, but in my very limited testing with my own e2e cluster, 4 seemed sufficient.
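To make the arithmetic concrete, here is a hypothetical sketch of the pool after this change (the machine type is an assumption; the diff only shows the node count):

```yaml
# Hypothetical node pool, assuming a 4-vCPU machine type such as n1-standard-4.
nodePools:
- name: "default"
  initialNodeCount: 8            # was 4
  config:
    machineType: n1-standard-4   # 4 vCPUs per node
# 8 nodes x 4 vCPUs = 32 vCPUs, which can exceed the default per-zone CPU
# quota granted to new GCP projects.
```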
So this is for the agones-images project, not really for individuals (although they are welcome to use it too). We run a higher parallel rate (32, from memory) and need the extra room.
In fact, given that e2e now takes 8 minutes, we could bump that up a bit too; hence the extra space.
Ah, I didn't realize that this was only used for e2e. The e2e cluster already has 8 nodes, so this just makes that consistent.
```diff
 password: supersecretpassword
 legacyAbac:
   enabled: {{ properties["cluster.legacyAbac"] }}
+username: ""
```
You should remove the masterAuth block. From gcloud: "WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default."
From the API docs: "The username to use for HTTP basic authentication to the master endpoint. For clusters v1.6.0 and later, basic authentication can be disabled by leaving username unspecified (or setting it to the empty string)." (link) I thought it might be good to explicitly turn it off, but removing it will do the same thing, so I'll remove it.
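For illustration, a minimal sketch of the two equivalent approaches discussed here (structure per the GKE API; this is not the exact template):

```yaml
# Option 1 (explicit): an empty username disables basic auth on clusters >= v1.6.0.
masterAuth:
  username: ""

# Option 2 (implicit): omit the masterAuth block entirely. On GKE 1.12+, new
# clusters then have basic auth disabled (and no client certificate) by default.
```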
I did a test last night, and removing the masterAuth block in Terraform does the right thing in 1.12 (no basic auth, no client cert). It sounds like Deployment Manager works the same way, which is nice.
Updated to 1.12:
- GKE dev cluster deployment manager template
- e2e dev cluster deployment manager template
- kubectl version
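As a rough sketch, the version bump in a Deployment Manager GKE template would land on the cluster's initialClusterVersion property (the resource name and zone below are hypothetical; only the type and field come from the Deployment Manager/GKE API):

```yaml
resources:
- name: e2e-test-cluster        # hypothetical resource name
  type: container.v1.cluster    # Deployment Manager's GKE cluster type
  properties:
    zone: us-west1-c            # hypothetical zone
    cluster:
      initialClusterVersion: "1.12"   # the version this PR moves to
```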
Force-pushed from 78c3489 to 2b00f49
Build Succeeded 👏 Build Id: 7d5c195d-817c-4b5d-a7bd-365ab0a899eb
The following development artifacts have been built, and will exist for the next 30 days:
A preview of the website (the last 30 builds are retained). To install this version:
/lgtm
Updated to 1.12:
- GKE dev cluster deployment manager template
- e2e dev cluster deployment manager template
- kubectl version

Work on #717