# terraform-provider-kafka

A Terraform plugin for managing Apache Kafka.

## Installation

`terraform-provider-kafka` is available on the Terraform Registry. To install it, add the configuration below to your `main.tf` and run `terraform init`:
```hcl
terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
    }
  }
}

provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
  tls_enabled       = true
}
```
Alternatively, install the provider manually by downloading and extracting the latest release into your Terraform plugin directory (typically `~/.terraform.d/plugins/`).
## Developing

- Install Go
- Clone the repository into `$GOPATH/src/github.com/Mongey/terraform-provider-kafka`:

  ```sh
  mkdir -p $GOPATH/src/github.com/Mongey
  cd $GOPATH/src/github.com/Mongey
  git clone https://github.com/Mongey/terraform-provider-kafka.git
  cd terraform-provider-kafka
  ```

- Build the provider:

  ```sh
  make build
  ```

- Run the tests:

  ```sh
  make test
  ```

- Start a TLS-enabled Kafka cluster:

  ```sh
  docker-compose up
  ```

- Run the acceptance tests:

  ```sh
  make testacc
  ```
## Provider Configuration

Example provider with TLS client authentication:

```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
  tls_enabled       = true
}
```
| Property | Description | Default |
| --- | --- | --- |
| `bootstrap_servers` | A list of `host:port` addresses used to discover the full set of alive brokers. | Required |
| `ca_cert` | The CA certificate, or path to a CA certificate file in PEM format, used to validate the server's certificate. | `""` |
| `client_cert` | The client certificate, or path to a file containing the client certificate in PEM format, used for client authentication to Kafka. If you have intermediate CA certificate(s), append them to `client_cert`. | `""` |
| `client_key` | The private key, or path to a file containing the private key, that the client certificate was issued for. | `""` |
| `client_key_passphrase` | The passphrase for the private key that the certificate was issued for. | `""` |
| `tls_enabled` | Enable communication with the Kafka cluster over TLS. | `true` |
| `skip_tls_verify` | Skip TLS verification. | `false` |
| `sasl_username` | Username for SASL authentication. | `""` |
| `sasl_password` | Password for SASL authentication. | `""` |
| `sasl_mechanism` | Mechanism for SASL authentication. Allowed values are `plain`, `scram-sha512`, and `scram-sha256`. | `plain` |
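As an illustration of the SASL settings above, a provider block for SASL/PLAIN might look like the following sketch (the broker address and credentials are placeholders — substitute your own, ideally via variables):

```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  tls_enabled       = true

  # Placeholder credentials for illustration only.
  sasl_username  = "admin"
  sasl_password  = "admin-secret"
  sasl_mechanism = "plain"
}
```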
## Resources

### `kafka_topic`

A resource for managing Kafka topics. The provider can increase a topic's partition count in place, without destroying and recreating the topic.
```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
}

resource "kafka_topic" "logs" {
  name               = "systemd_logs"
  replication_factor = 2
  partitions         = 100

  config = {
    "segment.ms"     = "20000"
    "cleanup.policy" = "compact"
  }
}
```
| Property | Description |
| --- | --- |
| `name` | The name of the topic. |
| `partitions` | The number of partitions the topic should have. |
| `replication_factor` | The number of replicas the topic should have. |
| `config` | A map of string key/value topic configuration attributes. |
Topics can be imported by name:

```sh
terraform import kafka_topic.logs systemd_logs
```
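To illustrate the in-place partition increase mentioned above: raising `partitions` on an existing topic and re-applying grows the topic without recreating it (Kafka does not support decreasing a partition count). A sketch, updating the example resource:

```hcl
resource "kafka_topic" "logs" {
  name               = "systemd_logs"
  replication_factor = 2
  partitions         = 150 # raised from 100; applied without destroying the topic

  config = {
    "segment.ms"     = "20000"
    "cleanup.policy" = "compact"
  }
}
```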
### `kafka_acl`

A resource for managing Kafka ACLs.
```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

resource "kafka_acl" "test" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Alice"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Deny"
}
```
| Property | Description | Valid values |
| --- | --- | --- |
| `acl_principal` | Principal that is being allowed or denied. | `*` |
| `acl_host` | Host from which the principal listed in `acl_principal` will have access. | `*` |
| `acl_operation` | Operation that is being allowed or denied. | `Unknown`, `Any`, `All`, `Read`, `Write`, `Create`, `Delete`, `Alter`, `Describe`, `ClusterAction`, `DescribeConfigs`, `AlterConfigs`, `IdempotentWrite` |
| `acl_permission_type` | Type of permission. | `Unknown`, `Any`, `Allow`, `Deny` |
| `resource_name` | The name of the resource. | `*` |
| `resource_type` | The type of resource. | `Unknown`, `Any`, `Topic`, `Group`, `Cluster`, `TransactionalID` |
| `resource_pattern_type_filter` | The pattern type of the resource. | `Prefixed`, `Any`, `Match`, `Literal` |
To import an ACL, pass its fields joined by the `|` character as the import ID. Quote the ID to avoid shell expansion.

```sh
# Fields, in shell notation:
# ${acl_principal}|${acl_host}|${acl_operation}|${acl_permission_type}|${resource_type}|${resource_name}|${resource_pattern_type_filter}
terraform import kafka_acl.admin 'User:12345|*|Describe|Allow|Topic|experimental-topic|Prefixed'
```
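As a worked example (the principal, topic, and group names are illustrative): a Kafka consumer typically needs `Read` on both the topic and its consumer group, which takes two ACL resources:

```hcl
# Allow Bob to read from the topic.
resource "kafka_acl" "consumer_topic" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Bob"
  acl_host            = "*"
  acl_operation       = "Read"
  acl_permission_type = "Allow"
}

# Allow Bob to commit offsets in his consumer group.
resource "kafka_acl" "consumer_group" {
  resource_name       = "syslog-consumers"
  resource_type       = "Group"
  acl_principal       = "User:Bob"
  acl_host            = "*"
  acl_operation       = "Read"
  acl_permission_type = "Allow"
}
```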
### `kafka_quota`

A resource for managing Kafka quotas.
```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

resource "kafka_quota" "test" {
  entity_name = "client1"
  entity_type = "client-id"

  config = {
    "consumer_byte_rate" = "4000000"
    "producer_byte_rate" = "3500000"
  }
}
```
| Property | Description |
| --- | --- |
| `entity_name` | The name of the entity. |
| `entity_type` | The entity type (`client-id`, `user`, `ip`). |
| `config` | A map of string quota configuration attributes for the entity. |
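The same resource covers the other entity types from the table. For instance, a sketch of a per-user quota (the username and limits are placeholders):

```hcl
resource "kafka_quota" "user_quota" {
  entity_name = "alice"
  entity_type = "user"

  config = {
    # Throttle alice's producers to ~2.5 MB/s.
    "producer_byte_rate" = "2500000"
  }
}
```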
### `kafka_user_scram_credential`

A resource for managing Kafka SCRAM user credentials.
```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert           = file("../secrets/ca.crt")
  client_cert       = file("../secrets/terraform-cert.pem")
  client_key        = file("../secrets/terraform.pem")
}

resource "kafka_user_scram_credential" "test" {
  username         = "user1"
  scram_mechanism  = "SCRAM-SHA-256"
  scram_iterations = "8192"
  password         = "password"
}
```
| Property | Description |
| --- | --- |
| `username` | The username. |
| `scram_mechanism` | The SCRAM mechanism (`SCRAM-SHA-256` or `SCRAM-SHA-512`). |
| `scram_iterations` | The number of SCRAM iterations (must be >= 4096). Default: `4096`. |
| `password` | The password for the user. |
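Once the credential exists, it can be used for SASL/SCRAM authentication. As a sketch, a provider block matching the resource above (the broker address is a placeholder, and the credentials echo the example resource — use your own in practice):

```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  tls_enabled       = true
  sasl_mechanism    = "scram-sha256"
  sasl_username     = "user1"
  sasl_password     = "password"
}
```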