new OVH cluster #2414
Conversation
so we can create a pull secret for chart images
Hello @minrk Here are some answers to your questions: About the region: your previous cluster was hosted in the GRA region (which stands for Gravelines), so I suggest selecting the same. About the OVH tokens: they are all related to our API. About the s3 provider: maybe this can help you. Before doing that, I guess you will have to create your s3 user and create your bucket.
About the registry: I got an answer from the registry team:
s3 bucket created by hand (just like gcs)
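For the record, creating that bucket by hand only takes a couple of aws CLI calls pointed at OVH's S3-compatible endpoint. A minimal sketch, assuming a GRA endpoint of the form s3.gra.io.cloud.ovh.net and a hypothetical bucket name; the real credentials come from the s3 user created in the OVH control panel:

# credentials of the OVH s3 user (placeholders)
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# create the state bucket against the S3-compatible endpoint (GRA region assumed)
aws s3 mb s3://mybinder-tfstate \
  --region gra \
  --endpoint-url https://s3.gra.io.cloud.ovh.net

# sanity check: list buckets through the same endpoint
aws s3 ls --endpoint-url https://s3.gra.io.cloud.ovh.net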
because one pull secret can't grant access to multiple projects (this will be fixed in Harbor 2.2)
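For anyone following along, the pull secret itself is a standard docker-registry secret scoped to a single Harbor project. A rough sketch with kubectl; the registry host, robot account, and namespace are placeholders rather than the values actually used in this deployment:

# create an image pull secret for the Harbor project that holds the chart images
kubectl create secret docker-registry chart-pull-secret \
  --namespace=binder \
  --docker-server=registry.example.ovh.net \
  --docker-username='robot$chart-puller' \
  --docker-password="$HARBOR_ROBOT_TOKEN"

# pods that pull from that project then reference it via
#   imagePullSecrets:
#     - name: chart-pull-secret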
Thanks! I was able to migrate state to s3 on OVH. The cluster is now deployed on a private network, which seems right. The node flavors don't quite line up with what we're used to. We typically use GCP's 'highmem' nodes, which have a ~7:1 ram:cpu ratio, whereas OVH offers ~4:1 or ~12:1, but nothing in between. We have used 4 cpu nodes for the core pool and 8 cpu nodes for the user pool. I suggest we start with node pools; a rough sketch follows at the end of this comment.
Given the ratios in OVH quotas (~6 GB / core), we could also go with b2-60 (16 cpu, 60GB) for user nodes and have roughly twice the cpu headroom per user that we do on GKE. I also don't know if there's anything special we want to do with disks for image building. On GKE we mount SSDs for this, but I think perhaps the boot disk SSDs are fine? At least okay for now. So if we're confident in the cluster setup so far, I can update the domain and merge to try to start deploying on CI.
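The sketch mentioned above: roughly what the two proposed pools could look like, written out for the OVH Terraform provider's ovh_cloud_project_kube_nodepool resource. The flavor names follow the quota comment below; the pool sizes, variable names, and cluster resource reference are illustrative assumptions, not the final configuration:

cat > nodepools.tf <<'EOF'
# core pool: smaller general-purpose flavor
resource "ovh_cloud_project_kube_nodepool" "core" {
  service_name  = var.ovh_project_id                 # public cloud project ID (assumed variable)
  kube_id       = ovh_cloud_project_kube.cluster.id  # assumed cluster resource name
  name          = "core"
  flavor_name   = "b2-15"
  desired_nodes = 2
  min_nodes     = 1
  max_nodes     = 3
}

# user pool: memory-heavy flavor for user pods
resource "ovh_cloud_project_kube_nodepool" "user" {
  service_name  = var.ovh_project_id
  kube_id       = ovh_cloud_project_kube.cluster.id
  name          = "user"
  flavor_name   = "r2-60"
  desired_nodes = 2
  min_nodes     = 1
  max_nodes     = 6
}
EOF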
@mael-le-gal where can we see logs, e.g. kubernetes cluster event logs? I tried clicking on the 'Logs Data Platform' in the cloud project sidebar, but then it tried to get me to buy it on my personal account. How do we associate logs with an existing cloud project that has the credits, etc.?
Quota has been bumped, so the cluster is now using b2-15 and r2-60.
Currently I think you can't, except from the control panel UI directly. More info here. The global OVHcloud offer is currently missing an integrated observability solution where all logs from all services end up in the same place for customers. We are aware of that limitation and it's on the long-term roadmap.
@mael-le-gal can I associate a cloud public IP with a kubernetes load balancer? We do that in other deployments so that if the public kubernetes Service gets recreated, the IP doesn't get released. As far as I can tell, though, I can only associate public IPs with instances, not load balancers. |
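For context, what we do on the other clusters is reserve an address first and then pin it on the Service, so recreating the Service doesn't release the IP. A minimal sketch of that pattern; the names, selector, and documentation-range IP are placeholders, and whether OVH's load balancer controller honours spec.loadBalancerIP is exactly the open question:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: proxy-public
  namespace: binder
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # pre-reserved public IP (placeholder)
  selector:
    app: nginx-ingress           # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443
EOF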
now that it seems like we're keeping this cluster
OK! I think this is ready to go to the next step. The cluster is deployed, and I've deployed it manually. Note that this will not make ovh2 part of the federation; that's a separate step. This will just get the new cluster deployed from CI instead of my laptop, to ensure everything really is working.
so's we don't forget
Looked good to me overall! I just had some ideas related to imagePullSecret
default limit seems to get OOMKilled after our ban patches
"--limits", | ||
"memory=250Mi", | ||
"--requests", | ||
"memory=200Mi", |
@mael-le-gal coredns was getting OOMKilled after we applied our coredns config to ban a bunch of IPs, so we have to raise this a bit. I'm not sure if that's feedback that would be useful to you.
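For reference, the argument list in the diff above corresponds roughly to a one-off command along these lines; the deployment name and namespace are the usual kube-system/coredns defaults, assumed rather than copied from this repo's deploy script:

# give coredns more headroom so the large IP-ban config doesn't get it OOMKilled
kubectl --namespace kube-system set resources deployment coredns \
  --limits=memory=250Mi \
  --requests=memory=200Mi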
@consideRatio thanks for the review! Going ahead, since the pull secret that had some review comments was entirely unnecessary and is now removed.
implementing #2407
So far:
ovh2
So far it's simpler and less unique than the previous OVH cluster, since we are using standard credentials, letsencrypt, a load-balancer service, etc. Nothing is manually created and no nodes need special treatment.
Still to do:
Questions for @mael-le-gal and OVH folks, especially since I'm new to the OVH tools:
backend = gcs
for google, and I've done the same with the OVH deploy, but it would probably be nice to store it in the OVH object store. I'm guessing the s3 backend can be configured with the right options, I just don't know what they are. Thanks!
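In case it helps future readers, a minimal sketch of what pointing the s3 backend at OVH object storage might look like; the endpoint, bucket name, and the exact set of skip_* flags required are assumptions to verify against OVH's S3-compatible storage, not a tested configuration:

cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket   = "mybinder-tfstate"                  # placeholder bucket name
    key      = "ovh2/terraform.tfstate"
    region   = "gra"
    endpoint = "https://s3.gra.io.cloud.ovh.net"   # assumed OVH S3 endpoint
    skip_credentials_validation = true             # not a real AWS account
    skip_region_validation      = true
  }
}
EOF

# credentials come from the OVH s3 user, then pull the existing state across
export AWS_ACCESS_KEY_ID=...      # s3 user access key (placeholder)
export AWS_SECRET_ACCESS_KEY=...  # s3 user secret key (placeholder)
terraform init -migrate-state

terraform init -migrate-state should then offer to copy the current state from the old backend into the new one.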