Address issue with AWS instance type schema #2787
Conversation
@viniciusdc can you fill out the how to test section here?
Done. Thanks for catching it!
@dcmcand This will address the issue you saw. Could you have another look? (Let me know if you see any other pydantic errors; they might show up again.)
- `gpu: false` returns `AL2_x86_64`
- `gpu: true` returns `AL2_x86_64_GPU`
- a launch template with `ami_id` specified returns `CUSTOM`, whether `gpu` is true or false (see the sketch below)
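For illustration only, here is a minimal Python sketch of that selection rule; the `AmiType` enum and `select_ami_type` helper are hypothetical names for this example, not Nebari's actual code:

```python
from enum import Enum
from typing import Optional


class AmiType(str, Enum):
    # EKS AMI type identifiers referenced in the review comment above.
    AL2_x86_64 = "AL2_x86_64"
    AL2_x86_64_GPU = "AL2_x86_64_GPU"
    CUSTOM = "CUSTOM"


def select_ami_type(gpu: bool, ami_id: Optional[str] = None) -> AmiType:
    """Pick the EKS AMI type for a node group (illustrative only)."""
    if ami_id is not None:
        # A launch template with a custom AMI wins regardless of the gpu flag.
        return AmiType.CUSTOM
    return AmiType.AL2_x86_64_GPU if gpu else AmiType.AL2_x86_64


assert select_ami_type(gpu=False) is AmiType.AL2_x86_64
assert select_ami_type(gpu=True) is AmiType.AL2_x86_64_GPU
assert select_ami_type(gpu=True, ami_id="ami-0123456789abcdef0") is AmiType.CUSTOM
```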
Reference Issues or PRs
closes #2782
What does this implement/fix?
Put an `x` in the boxes that apply.
Testing
How to test this PR?
As the previous error happened only during `terraform plan`, you will need to mock the deployment of Nebari. I suggest modifying the source code to only plan instead of deploying (this should work for the second stage, since there is no pre-variable dependency AFAIK); stage 1 can be skipped by using local state instead of remote.

- Run `nebari init aws --project testing-gpu`.
- Without `gpu: enabled`, you should see `AL2_x86_64`.
- If `gpu` is passed, then its variant should show up: `AL2_x86_64_GPU`.
- Add an `ami_id` under the `node_template` block and test whether passing an AMI also changes the instance type to `CUSTOM` (a rough schema sketch follows after this list).
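As a rough, hypothetical sketch (not Nebari's real schema), the expected behaviour from the steps above could be derived during pydantic validation, so the value is already resolved by the time `terraform plan` reads its inputs. All class and field names below are assumptions made for this example:

```python
from typing import Optional

from pydantic import BaseModel, model_validator


class NodeGroup(BaseModel):
    # Hypothetical node-group model; field names are illustrative only.
    instance: str
    gpu: bool = False
    ami_id: Optional[str] = None
    ami_type: Optional[str] = None

    @model_validator(mode="after")
    def _default_ami_type(self):
        # Mirror the three manual cases from the checklist above.
        if self.ami_type is None:
            if self.ami_id is not None:
                self.ami_type = "CUSTOM"
            else:
                self.ami_type = "AL2_x86_64_GPU" if self.gpu else "AL2_x86_64"
        return self


print(NodeGroup(instance="m5.xlarge").ami_type)                        # AL2_x86_64
print(NodeGroup(instance="g4dn.xlarge", gpu=True).ami_type)            # AL2_x86_64_GPU
print(NodeGroup(instance="m5.xlarge", ami_id="ami-0123456789abcdef0").ami_type)  # CUSTOM
```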
Any other comments?
I made a deployment on AWS from this branch and GPUs were working as expected.