terraform-aws-msk-apache-kafka-cluster

Terraform module to provision Amazon Managed Streaming for Apache Kafka (Amazon MSK)

Note: this module is intended for use with an existing VPC. To create a new VPC, use terraform-aws-vpc module.


This project is part of our comprehensive "SweetOps" approach towards DevOps.

It's 100% Open Source and licensed under the APACHE2.

We literally have hundreds of Terraform modules that are Open Source and well-maintained. Check them out!

Security & Compliance

Security scanning is graciously provided by Bridgecrew. Bridgecrew is the leading fully hosted, cloud-native solution providing continuous Terraform security and compliance.

Benchmark                Description
Infrastructure Security  Infrastructure Security Compliance
CIS KUBERNETES           Center for Internet Security, KUBERNETES Compliance
CIS AWS                  Center for Internet Security, AWS Compliance
CIS AZURE                Center for Internet Security, AZURE Compliance
PCI-DSS                  Payment Card Industry Data Security Standards Compliance
NIST-800-53              National Institute of Standards and Technology Compliance
ISO27001                 Information Security Management System, ISO/IEC 27001 Compliance
SOC2                     Service Organization Control 2 Compliance
CIS GCP                  Center for Internet Security, GCP Compliance
HIPAA                    Health Insurance Portability and Accountability Compliance

Usage

IMPORTANT: We do not pin modules to versions in our examples because of the difficulty of keeping the versions in the documentation in sync with the latest released versions. We highly recommend that in your code you pin the version to the exact version you are using so that your infrastructure remains stable, and update versions in a systematic way so that they do not catch you by surprise.

Also, because of a bug in the Terraform registry (hashicorp/terraform#21417), the registry shows many of our inputs as required when in fact they are optional. The table below correctly indicates which inputs are required.

Here's how to invoke this module in your projects:

module "kafka" {
  source                 = "git::https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=master"
  namespace              = "eg"
  stage                  = "prod"
  name                   = "app"
  vpc_id                 = "vpc-XXXXXXXX"
  zone_id                = "Z14EN2YD427LRQ"
  security_groups        = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
  subnet_ids             = ["subnet-XXXXXXXXX", "subnet-YYYYYYYY"]
  kafka_version          = "2.4.1"
  number_of_broker_nodes = 3
  broker_instance_type   = "kafka.m5.large"
}
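
As noted above, in real projects you should pin source to an exact release rather than the moving master branch. A minimal sketch (the 0.7.0 tag is purely illustrative; substitute the release you have actually tested):

module "kafka" {
  # Pin to an exact release tag instead of a branch
  source = "git::https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=tags/0.7.0"

  vpc_id                 = "vpc-XXXXXXXX"
  subnet_ids             = ["subnet-XXXXXXXXX", "subnet-YYYYYYYY"]
  kafka_version          = "2.4.1"
  number_of_broker_nodes = 3
  broker_instance_type   = "kafka.m5.large"
}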

Examples

For a complete working example, see the examples directory in this repository.
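
Because this module targets an existing VPC, a common pattern is to look up the VPC and its subnets with data sources and pass the results in. A minimal sketch, assuming the VPC and subnets carry the (hypothetical) tags shown:

data "aws_vpc" "existing" {
  tags = {
    Name = "eg-prod-vpc" # hypothetical Name tag; match however your VPC is tagged
  }
}

data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.existing.id

  tags = {
    Tier = "private" # hypothetical tag that selects the private subnets
  }
}

module "kafka" {
  source                 = "git::https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=master"
  namespace              = "eg"
  stage                  = "prod"
  name                   = "app"
  vpc_id                 = data.aws_vpc.existing.id
  subnet_ids             = data.aws_subnet_ids.private.ids
  kafka_version          = "2.4.1"
  # Must be a multiple of the number of subnets returned above
  number_of_broker_nodes = 3
  broker_instance_type   = "kafka.m5.large"
}

Note that the aws_subnet_ids data source is deprecated in AWS provider v4 and removed in v5, where aws_subnets replaces it.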

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 0.13.0
aws >= 2.0
local >= 1.2
random >= 2.2
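
Expressed as a terraform settings block in your root module, these requirements look roughly like this (a sketch; tighten the constraints to the versions you actually test with):

terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 1.2"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.2"
    }
  }
}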

Providers

Name Version
aws >= 2.0

Modules

Name Source Version
hostname cloudposse/route53-cluster-hostname/aws 0.12.0
this cloudposse/label/null 0.24.1

Resources

Name
aws_msk_cluster
aws_msk_configuration
aws_msk_scram_secret_association
aws_security_group
aws_security_group_rule

Inputs

Name Description Type Default Required
additional_tag_map Additional tags for appending to tags_as_list_of_maps. Not added to tags. map(string) {} no
allowed_cidr_blocks List of CIDR blocks to be allowed to connect to the cluster list(string) [] no
attributes Additional attributes (e.g. 1) list(string) [] no
broker_instance_type The instance type to use for the Kafka brokers string n/a yes
broker_volume_size The size in GiB of the EBS volume for the data drive on each broker node number 1000 no
certificate_authority_arns List of ACM Certificate Authority Amazon Resource Names (ARNs) to be used for TLS client authentication list(string) [] no
client_broker Encryption setting for data in transit between clients and brokers. Valid values: TLS, TLS_PLAINTEXT, and PLAINTEXT string "TLS" no
client_sasl_scram_enabled Enables SCRAM client authentication via AWS Secrets Manager. bool false no
client_sasl_scram_secret_association_arns List of AWS Secrets Manager secret ARNs for SCRAM authentication. list(string) [] no
client_tls_auth_enabled Set true to enable the Client TLS Authentication bool false no
cloudwatch_logs_enabled Indicates whether you want to enable or disable streaming broker logs to CloudWatch Logs bool false no
cloudwatch_logs_log_group Name of the CloudWatch Log Group to deliver logs to string null no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {}
}
no
delimiter Delimiter to be used between namespace, environment, stage, name and attributes.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
enabled Set to false to prevent the module from creating any resources bool null no
encryption_at_rest_kms_key_arn You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest string "" no
encryption_in_cluster Whether data communication among broker nodes is encrypted bool true no
enhanced_monitoring Specify the desired enhanced MSK CloudWatch monitoring level. Valid values: DEFAULT, PER_BROKER, and PER_TOPIC_PER_BROKER string "DEFAULT" no
environment Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT' string null no
firehose_delivery_stream Name of the Kinesis Data Firehose delivery stream to deliver logs to string "" no
firehose_logs_enabled Indicates whether you want to enable or disable streaming broker logs to Kinesis Data Firehose bool false no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null for default, which is 0.
Does not affect id_full.
number null no
jmx_exporter_enabled Set true to enable the JMX Exporter bool false no
kafka_version The desired Kafka software version string n/a yes
label_key_case The letter case of label keys (tag names) (i.e. name, namespace, environment, stage, attributes) to use in tags.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
list(string) null no
label_value_case The letter case of output label values (also used in tags and id).
Possible values: lower, title, upper and none (no transformation).
Default value: lower.
string null no
name Solution name, e.g. 'app' or 'jenkins' string null no
namespace Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp' string null no
node_exporter_enabled Set true to enable the Node Exporter bool false no
number_of_broker_nodes The desired total number of broker nodes in the kafka cluster. It must be a multiple of the number of specified client subnets. number n/a yes
properties Contents of the server.properties file. Supported properties are documented in the MSK Developer Guide map(string) {} no
regex_replace_chars Regex to replace chars with empty string in namespace, environment, stage and name.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
s3_logs_bucket Name of the S3 bucket to deliver logs to string "" no
s3_logs_enabled Indicates whether you want to enable or disable streaming broker logs to S3 bool false no
s3_logs_prefix Prefix to append to the S3 folder name logs are delivered to string "" no
security_groups List of security group IDs to be allowed to connect to the cluster list(string) [] no
stage Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release' string null no
subnet_ids Subnet IDs for Client Broker list(string) n/a yes
tags Additional tags (e.g. map('BusinessUnit','XYZ')) map(string) {} no
vpc_id VPC ID where subnets will be created (e.g. vpc-aceb2723) string n/a yes
zone_id Route53 DNS Zone ID for MSK broker hostnames string null no
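
To illustrate how the authentication and encryption inputs fit together, here is a hedged sketch enabling SASL/SCRAM with TLS in transit. The secret ARN is a placeholder; per the AWS MSK documentation, SCRAM secrets must be named with the AmazonMSK_ prefix and encrypted with a customer-managed KMS key:

module "kafka" {
  source                 = "git::https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster.git?ref=master"
  vpc_id                 = "vpc-XXXXXXXX"
  subnet_ids             = ["subnet-XXXXXXXXX", "subnet-YYYYYYYY"]
  kafka_version          = "2.6.0"
  number_of_broker_nodes = 3
  broker_instance_type   = "kafka.m5.large"

  # Encrypt client-broker traffic and authenticate clients with SASL/SCRAM
  client_broker             = "TLS"
  client_sasl_scram_enabled = true
  client_sasl_scram_secret_association_arns = [
    "arn:aws:secretsmanager:us-east-1:111111111111:secret:AmazonMSK_app" # placeholder ARN
  ]
}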

Outputs

Name Description
bootstrap_broker_tls A comma-separated list of one or more DNS name (or IP) and TLS port pairs of Kafka brokers, suitable to bootstrap connectivity to the Kafka cluster
bootstrap_brokers A comma-separated list of one or more hostname:port pairs of Kafka brokers, suitable to bootstrap connectivity to the Kafka cluster
bootstrap_brokers_scram A comma-separated list of one or more DNS name (or IP) and TLS port pairs of Kafka brokers, suitable to bootstrap connectivity to the Kafka cluster using SASL/SCRAM
cluster_arn Amazon Resource Name (ARN) of the MSK cluster
cluster_name MSK Cluster name
config_arn Amazon Resource Name (ARN) of the configuration
current_version Current version of the MSK Cluster used for updates
hostname MSK Cluster Broker DNS hostname
latest_revision Latest revision of the configuration
security_group_id The ID of the security group created for the cluster
security_group_name The name of the security group created for the cluster
zookeeper_connect_string A comma-separated list of one or more hostname:port pairs to use to connect to the Apache ZooKeeper cluster
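
One way to consume these outputs is to publish the broker list where client applications can discover it, for example in SSM Parameter Store. A minimal sketch (the parameter path is arbitrary, and module.kafka refers to the module instance from the Usage section):

resource "aws_ssm_parameter" "kafka_brokers_tls" {
  name  = "/eg/prod/kafka/bootstrap_broker_tls" # arbitrary parameter path
  type  = "String"
  value = module.kafka.bootstrap_broker_tls
}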

Share the Love

Like this project? Please give it a ★ on our GitHub! (it helps us a lot)

Are you using this project or any of our other projects? Consider leaving a testimonial. =)

Related Projects

Check out these related projects.

References

For additional context, refer to some of these links.

  • Terraform Standard Module Structure - HashiCorp's standard module structure is a file and directory layout we recommend for reusable modules distributed in separate repositories.
  • Terraform Module Requirements - HashiCorp's guidance on all the requirements for publishing a module. Meeting the requirements for publishing a module is extremely easy.
  • Terraform random_integer Resource - The resource random_integer generates random values from a given range, described by the min and max attributes of a given resource.
  • Terraform Version Pinning - The required_version setting can be used to constrain which versions of the Terraform CLI can be used with your configuration.

Help

Got a question? We got answers.

File a GitHub issue, send us an email or join our Slack Community.

DevOps Accelerator for Startups

We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.

Learn More

Work directly with our team of DevOps experts via email, Slack, and video conferencing.

We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
  • Site Reliability Engineering. You'll have total visibility into your apps and microservices.
  • Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
  • GitOps. You'll be able to operate your infrastructure via Pull Requests.
  • Training. You'll receive hands-on training so your team can operate what we build.
  • Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
  • Troubleshooting. You'll get help to triage when things aren't working.
  • Code Reviews. You'll receive constructive feedback on Pull Requests.
  • Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.

Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

Discourse Forums

Participate in our Discourse Forums. Here you'll find answers to commonly asked questions. Most questions will be related to the enormous number of projects we support on our GitHub. Come here to collaborate on answers, find solutions, and get ideas about the products and services we value. It only takes a minute to get started! Just sign in with SSO using your GitHub account.

Newsletter

Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.

Office Hours

Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!

Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Developing

If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Fork the repo on GitHub
  2. Clone the project to your own machine
  3. Commit changes to your own branch
  4. Push your work back up to your fork
  5. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

Copyrights

Copyright © 2020-2021 Cloud Posse, LLC

License

See LICENSE for full details.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.

About

This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!

We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.

We offer paid support on all of our projects.

Check out our other projects, follow us on Twitter, apply for a job, or hire us to help with your cloud strategy and implementation.

Contributors

Erik Osterman
Hugo Samayoa
