Update Azure Terraform config to actually create dask pools #839

Merged 3 commits on Nov 19, 2021
Changes from 2 commits
12 changes: 4 additions & 8 deletions terraform/azure/main.tf
@@ -117,7 +117,7 @@ resource "azurerm_kubernetes_cluster_node_pool" "user_pool" {
 resource "azurerm_kubernetes_cluster_node_pool" "dask_pool" {
   # If dask_nodes is set, we use that. If it isn't, we use notebook_nodes.
   # This lets us set dask_nodes to an empty array to get no dask nodes
-  for_each = try(var.dask_nodes, var.notebook_nodes)
+  for_each = length(var.dask_nodes) == 0 ? var.notebook_nodes : var.dask_nodes
Member

The reason I used try instead in #834 is that without it, I can't tell JIL to not have any dask nodes - nil and null weren't accepted as variable values, and the default of {} meant there was no way to express having no nodes :( Maybe setting the default for dask_nodes to something other than {} might help? Not sure.
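
For reference, a minimal side-by-side sketch of the two expressions; the variable declarations below are illustrative stand-ins, not copied from this repo's variables.tf:

```hcl
# Illustrative declarations only; the real ones live in this repo's variables.tf
variable "notebook_nodes" {
  type    = map(any)
  default = {}
}

variable "dask_nodes" {
  type    = map(any)
  default = {}
}

# Expression from #834: try() only falls back to notebook_nodes when
# evaluating dask_nodes raises an error, so a cluster's tfvars can pass
# dask_nodes = {} to get zero dask pools.
#   for_each = try(var.dask_nodes, var.notebook_nodes)

# Expression in this PR: an empty dask_nodes map always mirrors the
# notebook pools, so {} can no longer mean "no dask pools at all".
#   for_each = length(var.dask_nodes) == 0 ? var.notebook_nodes : var.dask_nodes
```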

Member Author


Yeah, I'm not sure how to fix this, because without the suggested change I'm struggling to tell carbon plan to create the nodes - unless we switch to not automatically creating a dask pool for each defined notebook pool.
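
To make the two situations concrete, a hypothetical pair of tfvars excerpts (paths, pool names, and VM sizes are made up for illustration):

```hcl
# Hypothetical excerpt from a carbonplan-style cluster's .tfvars:
# with the ternary in this PR, explicitly defined pools are created as requested.
dask_nodes = {
  "small" = {
    vm_size = "Standard_E4s_v3"
  }
}

# Hypothetical excerpt from the JIL cluster's .tfvars:
# with the ternary, this no longer means "no dask pools"; an empty map
# falls back to notebook_nodes, which is the problem described above.
# dask_nodes = {}
```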

Member Author


Given that Terraform is not a procedural language, we may be trying to do something that is too complex for its capabilities.

Member


ok, how about we:

  1. Remove the one-liner setting dask_nodes = {} in the JIL tfvars,
  2. Merge this as is?

This means that when I tf apply JIL later, it'll also create those extra nodepools, but they should be empty and cost nothing. Let's just do that?
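
A rough sketch of why the unused pools should cost nothing, assuming the dask pool is autoscaled with a zero minimum (the autoscaling attribute names are standard azurerm ones, but the values, and the assumption that this file sets them this way, are illustrative):

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "dask_pool" {
  for_each              = length(var.dask_nodes) == 0 ? var.notebook_nodes : var.dask_nodes
  name                  = "dask${each.key}"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.jupyterhub.id
  vm_size               = each.value.vm_size

  # Assumed autoscaling settings: with min_count = 0 the pool keeps no VMs
  # around until a dask worker pod actually requests one, so an otherwise
  # unused pool accrues no compute cost.
  enable_auto_scaling = true
  min_count           = 0
  max_count           = 10 # illustrative ceiling
}
```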

Member Author


Seems ok to me for the time being - I've pushed that change


name = "dask${each.key}"
kubernetes_cluster_id = azurerm_kubernetes_cluster.jupyterhub.id
Expand All @@ -129,16 +129,12 @@ resource "azurerm_kubernetes_cluster_node_pool" "dask_pool" {

vm_size = each.value.vm_size
node_labels = {
"hub.jupyter.org/node-purpose" = "user",
"k8s.dask.org/node-purpose" = "scheduler",
# Explicitly set this label, so the cluster autoscaler recognizes it
# Without this, it doesn't seem to bring up nodes in the correct
# nodepool when necessary
"node.kubernetes.io/instance-type" = each.value.vm_size
"k8s.dask.org/node-purpose" = "worker",
"hub.jupyter.org/node-size" = each.value.vm_size
}

node_taints = [
"hub.jupyter.org_dedicated=user:NoSchedule"
"k8s.dask.org_dedicated=worker:NoSchedule"
]

