
feat: Add support for additional volumes in launch templates and launch configurations #800

Merged · 9 commits · Mar 24, 2020

Conversation

@jaimehrubiks (Contributor)

PR o'clock

Description

Now the user can specify additional volumes for their instances. This is
especially useful when the cluster will be used for storage-focused
applications.

Example 1 (Just required fields)

additional_volumes = [{
  block_device_name     = "/dev/sdb"
  volume_size           = 200
  delete_on_termination = false
}]

Example 2 (Multiple additional volumes & All features)

additional_volumes = [
  {
    block_device_name     = "/dev/sdb"
    volume_size           = 200
    volume_type           = "io1"
    iops                  = 100
    encrypted             = true
    kms_key_id            = "arn:aws:kms:us-east-1:89349:key/a4-ds-34"
    delete_on_termination = false
  },
  {
    block_device_name     = "/dev/sdc"
    volume_size           = 500
    delete_on_termination = false
  },
]
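
For context, this shape maps naturally onto a launch template's block_device_mappings. A minimal sketch of that wiring, with variable and resource names assumed for illustration (not the module's exact code):

    variable "additional_volumes" {
      description = "Extra EBS volumes to attach to each worker instance"
      type        = list(any)
      default     = []
    }

    resource "aws_launch_template" "workers" {
      name_prefix = "eks-workers-"

      # One block device mapping per entry; optional keys fall back via lookup()
      dynamic "block_device_mappings" {
        for_each = var.additional_volumes

        content {
          device_name = block_device_mappings.value["block_device_name"]

          ebs {
            volume_size           = block_device_mappings.value["volume_size"]
            volume_type           = lookup(block_device_mappings.value, "volume_type", "gp2")
            iops                  = lookup(block_device_mappings.value, "iops", null)
            encrypted             = lookup(block_device_mappings.value, "encrypted", null)
            kms_key_id            = lookup(block_device_mappings.value, "kms_key_id", null)
            delete_on_termination = lookup(block_device_mappings.value, "delete_on_termination", null)
          }
        }
      }
    }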

Relevant Issues:

#687
#572

Note: I only implemented this feature for launch-template workers.


@jaimehrubiks (Contributor, Author)

Why is the code format check failing on local.tf?

@js-timbirkett

Is there a valid use case for extra volumes mounted to worker nodes? Can the same be done with managed nodes?

My thinking is that Kubernetes nodes are autoscaled and would be recycled / scaled up and down by the Cluster Autoscaler. EBS volumes are generally destroyed when an instance is terminated.

I'd think that stateful workloads in need of block storage, such as Kafka, MySQL or Ceph, would use PersistentVolumes and PersistentVolumeClaims along with an appropriate CSI driver to create, mount and handle the lifecycle of volumes?

I might be wrong and may have missed some valuable context somewhere. I'm also not an expert in that field so always appreciate these learning opportunities 😄
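
For comparison, the dynamic-provisioning route described above can be sketched with the Terraform Kubernetes provider; the storage class name is an assumption, and any EBS-backed CSI class would do:

    resource "kubernetes_persistent_volume_claim" "kafka_data" {
      metadata {
        name = "kafka-data" # hypothetical claim for a stateful workload
      }

      spec {
        access_modes       = ["ReadWriteOnce"]
        storage_class_name = "gp2" # assumed EBS-backed storage class

        resources {
          requests = {
            storage = "200Gi"
          }
        }
      }
    }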

@jaimehrubiks (Contributor, Author) commented Mar 17, 2020

First, I am also learning; I'm not an expert at all. For most stateful applications, as you mention, claiming storage through CSI is the way to go. However, for the storage layer itself, say Ceph, I found it easier to manage and control the disks by manually specifying or adding devices, keeping a list of specific nodes and their devices. In my case I was using rook.io, and to make the setup similar to on-prem, this was the only way. Of course, I would love to hear other people's opinions on this. And for this use case I would configure the ASG not to scale up/down automatically (protect_from_scale_in = true), as in the sketch below.
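
A rough sketch of that pattern as a launch-template worker group entry; the key names follow the module's worker-group conventions but are illustrative, not tested:

    worker_groups_launch_template = [
      {
        name                  = "ceph-storage" # hypothetical storage node group
        instance_type         = "m5.xlarge"
        asg_desired_capacity  = 3
        asg_max_size          = 3
        # Keep the ASG from recycling storage nodes out from under Rook/Ceph
        protect_from_scale_in = true

        additional_volumes = [{
          block_device_name     = "/dev/sdb"
          volume_size           = 200
          delete_on_termination = false
        }]
      },
    ]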

@snstanton (Contributor) commented Mar 17, 2020

With the new AWS Bottlerocket OS, the worker node AMI automatically mounts a secondary volume, with a default size of 20 GB, to act as the containerd cache for the node. To change the size of this volume, I need the ability to override that default with something larger. I literally just wrote something very similar to this patch, so it would be great to get this PR merged.

@snstanton (Contributor)

It looks like the lint issues are missing whitespace at workers_launch_template.tf:355 and too much whitespace before the comment at local.tf:77. If you run terraform fmt on the files, it will fix the lint issues for you.

@jaimehrubiks (Contributor, Author) commented Mar 18, 2020

I messed up my git history a little bit; I think I fixed it. (Sorry, I'm not an expert.)

@snstanton, does this PR help your use case? How would you specify which volume is the one being used for the cache?

@jaimehrubiks reopened this Mar 18, 2020
@snstanton (Contributor)

Yes, in fact I branched and added this PR and it works perfectly. Here's what I have in my config:

      additional_volumes     = [{ block_device_name = "/dev/xvdb", volume_size = 50 }]

@snstanton (Contributor)

You can fix the Semantic Pull Request failure by changing the PR title to something like:

feat: add support for additional volumes in launch templates

@jaimehrubiks changed the title Add additional volumes feature to launch templates → feat: add additional volumes feature to launch templates Mar 18, 2020
@jaimehrubiks changed the title feat: add additional volumes feature to launch templates → feat: add support for additional volumes in launch templates Mar 18, 2020
@barryib changed the title feat: add support for additional volumes in launch templates → Feat: add support for additional volumes in launch templates Mar 19, 2020
@barryib changed the title Feat: add support for additional volumes in launch templates → feat: add support for additional volumes in launch templates Mar 19, 2020
@barryib changed the title feat: add support for additional volumes in launch templates → feat: Add support for additional volumes in launch templates Mar 20, 2020
@barryib (Member) left a comment


Thanks @jaimehrubiks for your PR. I just added some minor comments/questions.

Furthermore, can you add the same logic to the launch configuration?
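
For the launch-configuration side, the analogue would be a dynamic ebs_block_device block; a sketch under the same assumed additional_volumes variable (note that launch configurations accept no per-volume KMS key, so kms_key_id has no equivalent here):

    resource "aws_launch_configuration" "workers" {
      name_prefix   = "eks-workers-"
      image_id      = var.worker_ami_id # assumed variable
      instance_type = "m5.large"

      # One EBS block device per entry; optional keys fall back via lookup()
      dynamic "ebs_block_device" {
        for_each = var.additional_volumes

        content {
          device_name           = ebs_block_device.value["block_device_name"]
          volume_size           = ebs_block_device.value["volume_size"]
          volume_type           = lookup(ebs_block_device.value, "volume_type", "gp2")
          iops                  = lookup(ebs_block_device.value, "iops", null)
          encrypted             = lookup(ebs_block_device.value, "encrypted", null)
          delete_on_termination = lookup(ebs_block_device.value, "delete_on_termination", null)
        }
      }
    }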

…lumes

As @barryib pointed out, locals already defaults to an empty list, so specifying it in the expression is not needed, making the code more readable.

Co-Authored-By: Thierno IB. BARRY <ibrahima.br@gmail.com>
@jaimehrubiks (Contributor, Author)

Will take a look at the launch configuration tomorrow to see if it is similar.

@jaimehrubiks changed the title feat: Add support for additional volumes in launch templates → feat: Add support for additional volumes in launch templates and launch configurations Mar 23, 2020
@jaimehrubiks (Contributor, Author)

All changes applied. I did not test this feature on worker groups (launch configuration).

@barryib (Member) commented Mar 24, 2020

> All changes applied. I did not test this feature on worker groups (launch configuration).

Nice @jaimehrubiks. Thanks for your contribution.

@barryib merged commit de00694 into terraform-aws-modules:master Mar 24, 2020
@github-actions (bot)

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators Nov 18, 2022