making into terraform module #2

Open

wants to merge 5 commits into main
Conversation

clayrosenthal

Doing a few things to make this into a proper Terraform module:

  • Moving everything that was hard-coded in a `locals` block into a `variables.tf` file
  • Moving the output to a separate `outputs.tf` file
  • Removing any resources/provisioners that created local files
    • Instead, reading that data directly from Terraform variables
    • Using `content` instead of `source`
  • Changing `null_resource` to `terraform_data`, a newer built-in resource that serves the same no-op purpose without requiring the `null` provider
  • Adding relevant instructions to the README
    • How to use it as a module and how to use it as-is from the repo
    • How to get the useful outputs and run the relevant helm commands
  • Updating the NCCL tests to pull the relevant `nccl_topos` from the hosts
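The `null_resource` to `terraform_data` migration mentioned above can be sketched as follows; the resource name and trigger key are illustrative, not taken from this PR:

```hcl
# Before (requires the hashicorp/null provider):
# resource "null_resource" "install" {
#   triggers = { script_hash = sha256(local.install_script) }
# }

# After: terraform_data is built into Terraform >= 1.4, so no extra
# provider is needed; triggers_replace forces replacement on change.
resource "terraform_data" "install" {
  triggers_replace = {
    script_hash = sha256(var.install_script)
  }
}
```

`terraform_data` also accepts an `input` value that is echoed back as `output`, which `null_resource` never supported.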


```bash
kubectl apply -f https://raw.githubusercontent.com/kubeflow/mpi-operator/v0.4.0/deploy/v2beta1/mpi-operator.yaml
kubectl apply -f examples/nccl-test.yaml
```
How do you actually get the reading/result from this test?
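One way to read the results, assuming the launcher-pod label that mpi-operator applies in its v2beta1 examples (a sketch, not verified against this repo's manifests):

```shell
# The NCCL benchmark prints its bandwidth table to the launcher pod's
# stdout; the label below is the standard mpi-operator launcher label.
kubectl logs -l training.kubeflow.org/job-role=launcher --tail=-1 \
  | grep -E 'busbw|Avg bus bandwidth'
```

The `busbw` column (GB/s) is the usual figure of merit for `all_reduce_perf`-style tests.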

```hcl
EOT
headnode_entry    = local.use_lb ? one(crusoe_compute_instance.k3s_lb) : one(crusoe_compute_instance.k3s_headnode)
ingress_interface = local.headnode_entry.network_interfaces[0]
headnode_has_gpu  = strcontains(var.headnode_instance_type, "sxm-ib")
```


Is this future-proof? Is it a reasonable assumption that the instance type will always have an SXM-connected GPU? Or do we foresee GPUs being supported in future which have a different connection type?
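One hypothetical way to avoid hard-coding the interconnect substring is to let callers override the detection explicitly; the variable name and default pattern below are illustrative, not from this PR:

```hcl
# Illustrative: explicit override wins; otherwise fall back to a
# pattern match on the instance type (pattern is an assumption).
variable "headnode_has_gpu" {
  description = "Set to force GPU detection on/off; null = infer from instance type"
  type        = bool
  default     = null
}

locals {
  headnode_has_gpu = coalesce(
    var.headnode_has_gpu,
    can(regex("(sxm|pcie)", var.headnode_instance_type)),
  )
}
```

This keeps the current behavior as the default while giving users of the module an escape hatch when new connection types appear.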
