Showing 35 changed files with 2,304 additions and 0 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
*.pyc
*.swp
build/
dist/
kapitan.egg-info/
7 changes: 7 additions & 0 deletions AUTHORS
@@ -0,0 +1,7 @@
# This is the list of Kapitan authors for copyright purposes.
#
# This does not necessarily list everyone who has contributed code, since in
# some cases, their employer may be the copyright holder. To see the full list
# of contributors, see the revision history in source control.

DeepMind Technologies Ltd.
23 changes: 23 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,23 @@
# How to Contribute

We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.

## Contributor License Agreement

Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.

You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.

## Code reviews

All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
202 changes: 202 additions & 0 deletions LICENCE
@@ -0,0 +1,202 @@

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
202 changes: 202 additions & 0 deletions LICENSE
@@ -0,0 +1,202 @@

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
315 changes: 315 additions & 0 deletions README.md
@@ -0,0 +1,315 @@
# Kapitan: A generic jsonnet-based configuration deployment system

Kapitan is a command-line tool for declaring, instrumenting and documenting
infrastructure, with the goal of writing reusable components for Kubernetes whilst avoiding
duplication and promoting conventions and patterns for extensibility.

## Main Features

* Jsonnet data templating
* Targets
* Inventory
* Jinja2 jsonnet templating

## Getting Started

The instructions below will help you get started with a copy of Kapitan on
your local machine so you can start using and developing it.

### Prerequisites

Kapitan needs Python 2.7+ and the libraries in requirements.txt.

Run setup.py to build and install it in your environment:

```
$ python setup.py install
```
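
If the libraries from requirements.txt are not already installed, a common way to pull them in first is with pip (a sketch only; adjust for your environment or virtualenv):

```
$ pip install -r requirements.txt
$ python setup.py install
```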

## Usage

The example below compiles the _minikube-es_ target in the
_examples_ directory:

```
$ cd examples/
$ kapitan compile -f targets/minikube-es/target.json
Wrote compiled/minikube-es/manifests/es-master.yml
Wrote compiled/minikube-es/manifests/es-elasticsearch-svc.yml
Wrote compiled/minikube-es/manifests/es-discovery-svc.yml
Wrote compiled/minikube-es/manifests/es-client.yml
Wrote compiled/minikube-es/manifests/es-data.yml
Wrote compiled/minikube-es/scripts/setup.sh with mode 0740
Wrote compiled/minikube-es/scripts/kubectl.sh with mode 0740
Wrote compiled/minikube-es/docs/README.md with mode 0640
```

Compiled manifests will be available at:

```
compiled/minikube-es
```

An alternative output path can be set with ```--output-path=/path/for/compiled```
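
For instance (the output directory below is just an illustrative path):

```
$ kapitan compile -f targets/minikube-es/target.json --output-path=/tmp/compiled
```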

Example of the default output layout:

```
$ find compiled/
compiled/
compiled/minikube-es
compiled/minikube-es/scripts
compiled/minikube-es/scripts/kubectl.sh
compiled/minikube-es/scripts/setup.sh
compiled/minikube-es/docs
compiled/minikube-es/docs/README.md
compiled/minikube-es/manifests
compiled/minikube-es/manifests/es-client.yml
compiled/minikube-es/manifests/es-data.yml
compiled/minikube-es/manifests/es-master.yml
compiled/minikube-es/manifests/es-elasticsearch-svc.yml
compiled/minikube-es/manifests/es-discovery-svc.yml
```

All manifests can then be applied at once using kubectl:

```
$ kubectl --context=minikube apply --dry-run -f compiled/minikube-es/manifests/
```

Or use the compiled kubectl.sh script, which defaults to the example target:

```
$ compiled/minikube-es/scripts/kubectl.sh apply --dry-run -f compiled/minikube-es/manifests/
statefulset "cluster-client" configured (dry run)
statefulset "cluster-data" configured (dry run)
service "elasticsearch-discovery" configured (dry run)
service "elasticsearch" configured (dry run)
statefulset "cluster-master" configured (dry run)
```

Get this target's pods:

```
$ ./compiled/minikube-es/scripts/kubectl.sh get pods -w
NAME READY STATUS RESTARTS AGE
cluster-client-0 1/1 Running 0 50s
cluster-data-0 1/1 Running 0 20s
cluster-master-0 1/1 Running 0 59s
```

## Feature Overview

### Jsonnet data templating

Kapitan builds on top of the [Jsonnet](https://jsonnet.org) language, _a domain specific configuration language
that helps you define JSON data._
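
As a quick flavour of jsonnet itself, values can be declared once and reused; this is a generic sketch, not Kapitan-specific:

```
local replicas = 1;
{
  name: "elasticsearch-data",
  replicas: replicas,
  labels: { name: self.name },
}
```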

### Targets

Targets define what to compile, with which parameters, and where the output goes.

Kapitan requires a target file in order to compile a target with its parameters.

The target snippet below will compile the jsonnet file at _targets/minikube-es/main.jsonnet_ with the target and namespace variables set to _minikube-es_.

```
{
"version": 1,
"vars": {
"target": "minikube-es",
"namespace": "minikube-es"
},
"compile": [
{
"name": "manifests",
"type": "jsonnet",
"path": "targets/minikube-es/main.jsonnet",
"output": "yaml"
}
]
}
```

The compiled files will be written in YAML format into the "manifests" directory:

```
$ find compiled/
compiled/
compiled/minikube-es
compiled/minikube-es/manifests
compiled/minikube-es/manifests/es-client.yml
compiled/minikube-es/manifests/es-data.yml
compiled/minikube-es/manifests/es-master.yml
compiled/minikube-es/manifests/es-elasticsearch-svc.yml
compiled/minikube-es/manifests/es-discovery-svc.yml
```

Other compile types can be specified, like _jinja2_:

```
{
"version": 1,
"vars": {
"target": "minikube-es",
"namespace": "minikube-es"
},
"compile": [
{
"name": "manifests",
"type": "jsonnet",
"path": "targets/minikube-es/main.jsonnet",
"output": "yaml"
},
{
"name": "docs",
"type": "jinja2",
"path": "targets/minikube-es/docs"
}
]
}
```

Compiling this target will create directories _manifests_ and _docs_:

```
$ find compiled/
compiled/
compiled/minikube-es
compiled/minikube-es/docs
compiled/minikube-es/docs/README.md
compiled/minikube-es/manifests
compiled/minikube-es/manifests/es-client.yml
compiled/minikube-es/manifests/es-data.yml
compiled/minikube-es/manifests/es-master.yml
compiled/minikube-es/manifests/es-elasticsearch-svc.yml
compiled/minikube-es/manifests/es-discovery-svc.yml
```


### Inventory

The inventory allows you to define values in a hierarchical way. The current implementation uses [reclass](https://github.com/madduck/reclass), a library that provides a simple way to classify nodes (targets).

By default, Kapitan will look for an _inventory/_ directory to render the inventory from.
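
The example inventory shipped under _examples/_ is laid out roughly like this, with classes and targets kept in separate subdirectories:

```
$ find inventory/
inventory/
inventory/classes
inventory/classes/app
inventory/classes/app/elasticsearch.yml
inventory/classes/cluster
inventory/classes/cluster/common.yml
inventory/classes/cluster/minikube.yml
inventory/targets
inventory/targets/minikube-es.yml
```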

#### Inventory Classes

Classifying almost anything will help you avoid repetition (DRY) and will force you to organise parameters hierarchically.

For example, the snippet below, taken from the example elasticsearch class, declares
the parameters needed by the elasticsearch component:

```
$ cat inventory/classes/app/elasticsearch.yml
parameters:
elasticsearch:
image: "quay.io/pires/docker-elasticsearch-kubernetes:5.5.0"
java_opts: "-Xms512m -Xmx512m"
replicas: 1
masters: 1
roles:
master:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
data:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
...
```

#### Inventory Targets

The example inventory target _minikube-es_ includes the elasticsearch and minikube classes
as a way to say "I want the minikube-es target with the parameters defined in
the elasticsearch app and minikube cluster classes":

```
$ cat inventory/targets/minikube-es.yml
classes:
- cluster.minikube
- app.elasticsearch
```

#### Having a look

Rendering the inventory for the _minikube-es_ target:

```
$ kapitan inventory -t minikube-es
...
classes:
- cluster.minikube
- app.elasticsearch
environment: base
parameters:
cluster:
name: minikube
user: minikube
elasticsearch:
image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0
java_opts: -Xms512m -Xmx512m
masters: 1
replicas: 1
roles:
client:
image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0
java_opts: -Xms512m -Xmx512m
masters: 1
replicas: 1
data:
image: quay.io/pires/docker-elasticsearch-kubernetes:5.5.0
java_opts: -Xms512m -Xmx512m
masters: 1
replicas: 1
...
```


#### Inventory in jsonnet

Accessing the inventory from jsonnet compile types requires you to import "kapitan.libjsonnet", located in the jsonnet/ directory, which includes the native_callback functions gluing reclass to jsonnet (amongst others).

The jsonnet snippet below imports the inventory for the target you're compiling
and returns the java_opts for the elasticsearch data role:

```
local kap = import "lib/kapitan.libjsonnet";
local inventory = kap.inventory();
{
"data_java_opts": inventory.parameters.elasticsearch.roles.data.java_opts,
}
```
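
With the example inventory above, compiling this with output "yaml" would produce something along these lines (the value comes from the elasticsearch class):

```
data_java_opts: -Xms512m -Xmx512m
```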

#### Inventory in jinja2

Jinja2 compile types will pass "inventory" and any target vars as context keys into your template.

This snippet renders the same java_opts for the elasticsearch data role:

```
java_opts for elasticsearch data role are: {{ inventory.parameters.elasticsearch.roles.data.java_opts }}
```


### Jinja2 jsonnet templating

Just as with reading the inventory within jsonnet, Kapitan also provides a function to render a Jinja2 template file. Again, importing "kapitan.libjsonnet" is needed.

The jsonnet snippet below renders the jinja2 template in templates/got.j2:

```
local kap = import "lib/kapitan.libjsonnet";
{
"jon_snow": kap.jinja2_template("templates/got.j2", { is_dead: false }),
}
```
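
For reference, the example template at _templates/got.j2_ contains a single line:

```
is dead {{ is_dead }}
```

so with the context above, the _jon_snow_ key would compile to something like "is dead false".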

It's up to you to decide what the output is.
9 changes: 9 additions & 0 deletions README.rst
@@ -0,0 +1,9 @@
## Description

Kapitan is a tool to manage Kubernetes configuration using jsonnet templates.


## Additional links

https://jsonnet.org
https://github.com/databricks/jsonnet-style-guide
4 changes: 4 additions & 0 deletions bin/kapitan
@@ -0,0 +1,4 @@
#!/bin/bash

BINPATH=$(dirname "$0")
PYTHONPATH="$BINPATH/../:$PYTHONPATH" python -m kapitan "$@"
21 changes: 21 additions & 0 deletions examples/components/elasticsearch/elasticsearch.container.jsonnet
@@ -0,0 +1,21 @@
local kube = import "lib/kube.libjsonnet";

{
Container(role, image):: kube.Container(role) {
local transportPort = {
name: "transport",
protocol: "TCP",
containerPort: 9300
},

local clientPort = {
name: "client",
protocol: "TCP",
containerPort: 9200
},

image: image,

ports: (if role == "client" then [clientPort, transportPort] else [transportPort])
},
}
84 changes: 84 additions & 0 deletions examples/components/elasticsearch/elasticsearch.statefulset.jsonnet
@@ -0,0 +1,84 @@
local kube = import "lib/kube.libjsonnet";
local kap = import "lib/kapitan.libjsonnet";
local containers = import "./elasticsearch.container.jsonnet";
local capabilities = ["IPC_LOCK", "SYS_RESOURCE"];
local inv = kap.inventory();

local init_container = {
name: "sysctl",
image: "busybox",
imagePullPolicy: "IfNotPresent",
command: ["sysctl", "-w", "vm.max_map_count=262144"],
securityContext: {
privileged: true,
},
};

{
local inv_base = inv.parameters.elasticsearch,
local base = {
StatefulSet(name, role): kube.StatefulSet(name) {
Args:: {
Name:: name,
Role:: role,
ClusterName:: error "please define a cluster name",
Image:: inv_base.roles[role].image,
Replicas:: inv_base.roles[role].replicas,
NumberOfMasters:: inv_base.roles[role].masters,
JavaOPTS:: inv_base.roles[role].java_opts,
Envs:: {
NAMESPACE: { fieldRef: { fieldPath: "metadata.namespace" } },
NODE_NAME: { fieldRef: { fieldPath: "metadata.name" } },
CLUSTER_NAME: args.ClusterName,
NUMBER_OF_MASTERS: args.NumberOfMasters,
NODE_MASTER: "false",
NODE_INGEST: "false",
NODE_DATA: "false",
HTTP_ENABLE: "false",
ES_JAVA_OPTS: args.JavaOPTS,
},
},
local args = self.Args,

metadata+: {
labels+: { role: role } },
spec+: {
replicas: args.Replicas,
template+: {
spec+: {
initContainers+: [init_container],
containers_+: {
role: containers.Container(args.Role, args.Image) + $.SecurityContext(false, capabilities) { env_+: args.Envs },
},
},
},
},
},

},
SecurityContext(privileged=true, cap_add=[], cap_remove=[]): {
securityContext+: {
privileged: privileged,
capabilities+: {
add: cap_add,
remove: cap_remove,
},
},
},

MasterNode(name):: base.StatefulSet(name, "master") {
Args+:: { Envs+:: { NODE_MASTER: "true" } },
},

DataNode(name):: base.StatefulSet(name, "data") {
Args+:: { Envs+:: { NODE_DATA: "true" } },
},

ClientNode(name):: base.StatefulSet(name, "client") {
Args+:: { Envs+:: { HTTP_ENABLE: "true" } },
},

IngestNode(name):: base.StatefulSet(name, "ingest") {
Args+:: { Envs+:: { NODE_INGEST: "true" } },
},
}
27 changes: 27 additions & 0 deletions examples/inventory/classes/app/elasticsearch.yml
@@ -0,0 +1,27 @@
parameters:
elasticsearch:
image: "quay.io/pires/docker-elasticsearch-kubernetes:5.5.0"
java_opts: "-Xms512m -Xmx512m"
replicas: 1
masters: 1
roles:
master:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
data:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
client:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
ingest:
image: ${elasticsearch:image}
java_opts: ${elasticsearch:java_opts}
replicas: ${elasticsearch:replicas}
masters: ${elasticsearch:masters}
0 changes: 0 additions & 0 deletions examples/inventory/classes/cluster/common.yml
Empty file.
13 changes: 13 additions & 0 deletions examples/inventory/classes/cluster/minikube.yml
@@ -0,0 +1,13 @@
# class for all targets deployed on minikube
#
classes:
- cluster.common

parameters:
cluster:
name: minikube
user: minikube
vault:
address: https://localhost:8200
mysql:
hostname: localhost
6 changes: 6 additions & 0 deletions examples/inventory/targets/minikube-es.yml
@@ -0,0 +1,6 @@
classes:
- cluster.minikube
- app.elasticsearch

parameters:
namespace: minikube-es
5 changes: 5 additions & 0 deletions examples/lib/kapitan.libjsonnet
@@ -0,0 +1,5 @@
{
jinja2_template(name, ctx):: std.native("jinja2_render_file")(name, std.toString(ctx)),
file_read(name):: std.native("file_read")(name),
inventory(target=std.extVar("target"), inv_path="inventory/"):: std.native("inventory")(target, inv_path),
}
388 changes: 388 additions & 0 deletions examples/lib/kube.libjsonnet
@@ -0,0 +1,388 @@
// Based on https://github.com/anguslees/kubecfg/blob/master/examples/lib/kube.libsonnet
//
// Generic library of Kubernetes objects
// TODO: Expand this to include all API objects.
//
// Should probably fill out all the defaults here too, so jsonnet can
// reference them. In addition, jsonnet validation is more useful
// (client-side, and gives better line information).

{
// Returns array of values from given object. Does not include hidden fields.
objectValues(o):: [o[field] for field in std.objectFields(o)],

// Returns array of [key, value] pairs from given object. Does not include hidden fields.
objectItems(o):: [[k, o[k]] for k in std.objectFields(o)],

// Replace all occurrences of `_` with `-`.
hyphenate(s):: std.join("-", std.split(s, "_")),

// Convert {foo: {a: b}} to [{name: foo, a: b}]
mapToNamedList(o):: [{ name: $.hyphenate(n) } + o[n] for n in std.objectFields(o)],

// Convert from SI unit suffixes to regular number
siToNum(n):: (
local convert =
if std.endsWith(n, "m") then [1, 0.001]
else if std.endsWith(n, "K") then [1, 1e3]
else if std.endsWith(n, "M") then [1, 1e6]
else if std.endsWith(n, "G") then [1, 1e9]
else if std.endsWith(n, "T") then [1, 1e12]
else if std.endsWith(n, "P") then [1, 1e15]
else if std.endsWith(n, "E") then [1, 1e18]
else if std.endsWith(n, "Ki") then [2, std.pow(2, 10)]
else if std.endsWith(n, "Mi") then [2, std.pow(2, 20)]
else if std.endsWith(n, "Gi") then [2, std.pow(2, 30)]
else if std.endsWith(n, "Ti") then [2, std.pow(2, 40)]
else if std.endsWith(n, "Pi") then [2, std.pow(2, 50)]
else if std.endsWith(n, "Ei") then [2, std.pow(2, 60)]
else error "Unknown numerical suffix in " + n;
local n_len = std.length(n);
std.parseInt(std.substr(n, 0, n_len - convert[0])) * convert[1]
),

_Object(apiVersion, kind, name):: {
apiVersion: apiVersion,
kind: kind,
metadata: {
name: name,
labels: { name: name },
namespace: std.extVar("namespace"),
annotations: {},
},
},

Endpoints(name): $._Object("v1", "Endpoints", name) {
Ip(addr):: { ip: addr },
Port(p):: { port: p },

subsets: [],
},

Service(name): $._Object("v1", "Service", name) {
local service = self,

target_pod:: error "service target_pod required",
target_container_name:: error "service target_container_name required",
target_container:: [c for c in self.target_pod.spec.containers if c["name"] == self.target_container_name][0],
type:: "ClusterIP",
ports_:: [
{
name: p["name"],
port: p["containerPort"],
targetPort: p["name"],
} for p in self.target_container.ports
],

// Helpers that format host:port in various ways
http_url:: "http://%s.%s:%s/" % [
self.metadata.name, self.metadata.namespace, self.spec.ports[0].port,
],
proxy_urlpath:: "/api/v1/proxy/namespaces/%s/services/%s/" % [
self.metadata.namespace, self.metadata.name,
],

spec: {
selector: service.target_pod.metadata.labels,
ports: service.ports_,
type: service.type
},

},

PersistentVolume(name): $._Object("v1", "PersistentVolume", name) {
spec: {},
},

// TODO: This is a terrible name
PersistentVolumeClaimVolume(pvc): {
persistentVolumeClaim: { claimName: pvc.metadata.name },
},

StorageClass(name): $._Object("storage.k8s.io/v1beta1", "StorageClass", name) {
provisioner: error "provisioner required",
},

PersistentVolumeClaim(name): $._Object("v1", "PersistentVolumeClaim", name) {
local pvc = self,

storageClass:: null,
storage:: error "storage required",

metadata+: if pvc.storageClass != null then {
annotations+: {
"volume.beta.kubernetes.io/storage-class": pvc.storageClass,
},
} else {},

spec: {
resources: {
requests: {
storage: pvc.storage,
},
},
accessModes: ["ReadWriteOnce"],
},
},

Container(name): {
name: name,
image: error "container image value required",

envList(map):: [
if std.type(map[x]) == "object" then { name: x, valueFrom: map[x] } else { name: x, value: std.toString(map[x]) }
for x in std.objectFields(map)],

env_:: {},
env: self.envList(self.env_),

imagePullPolicy: "Always",

args_:: {},
args: ["--%s=%s" % kv for kv in $.objectItems(self.args_)],

ports_:: {},
ports: $.mapToNamedList(self.ports_),

volumeMounts_:: {},
volumeMounts: $.mapToNamedList(self.volumeMounts_),

},

Pod(name): $._Object("v1", "Pod", name) {
spec: $.PodSpec,
},

PodSpec: {
containers_:: {},
containers: [{ name: $.hyphenate(name) } + self.containers_[name] for name in std.objectFields(self.containers_)],

initContainers_:: {},
initContainers: [{ name: $.hyphenate(name) } + self.initContainers_[name] for name in std.objectFields(self.initContainers_)],

volumes_:: {},
volumes: $.mapToNamedList(self.volumes_),

imagePullSecrets: [],

assert std.length(self.containers) > 0 : "must have at least one container",
},

EmptyDirVolume(): {
emptyDir: {},
},

HostPathVolume(path): {
hostPath: { path: path },
},

GitRepoVolume(repository, revision): {
gitRepo: {
repository: repository,

// "master" is possible, but should be avoided for production
revision: revision,
},
},

SecretVolume(secret): {
secret: { secretName: secret.metadata.name },
},

ConfigMapVolume(configmap): {
configMap: { name: configmap.metadata.name },
},

ConfigMap(name): $._Object("v1", "ConfigMap", name) {
data: {},

// I keep thinking data values can be any JSON type. This check
// will remind me that they must be strings :(
local nonstrings = [k for k in std.objectFields(self.data)
if std.type(self.data[k]) != "string"],
assert std.length(nonstrings) == 0 : "data contains non-string values: %s" % [nonstrings],
},

// subtype of EnvVarSource
ConfigMapRef(configmap, key): {
assert std.objectHas(configmap.data, key) : "%s not in configmap.data" % [key],
configMapKeyRef: {
name: configmap.metadata.name,
key: key,
},
},

ServiceAccount(name): $._Object("v1", "ServiceAccount", name) {
},

Secret(name): $._Object("v1", "Secret", name) {
local secret = self,

type: "Opaque",
data_:: {},
data: { [k]: std.base64(secret.data_[k]) for k in std.objectFields(secret.data_) },
},

RegistrySecret(name): $.Secret(name) {
type: "kubernetes.io/dockercfg",
},

// subtype of EnvVarSource
SecretKeyRef(secret, key): {
assert std.objectHas(secret.data, key) : "%s not in secret.data" % [key],
secretKeyRef: {
name: secret.metadata.name,
key: key,
},
},

// subtype of EnvVarSource
FieldRef(key): {
fieldRef: {
apiVersion: "v1",
fieldPath: key,
},
},

// subtype of EnvVarSource
ResourceFieldRef(key): {
resourceFieldRef: {
resource: key,
divisor_:: 1,
divisor: std.toString(self.divisor_),
},
},

Deployment(name): $._Object("extensions/v1beta1", "Deployment", name) {
local deployment = self,

spec: {
template: {
spec: $.PodSpec,
metadata: {
labels: deployment.metadata.labels,
annotations: {},
},
},
/* COMMENT temp removed (ramaro)
COMMENTING THE NEXT 25 LINES WHILE WE MIGRATE
kube-templates to health-templates.
This makes it easier to compare current jinja manifests that don't have the blocks below
with kapitan compiled manifests that do
*/
strategy: {
type: "RollingUpdate",

local pvcs = [v for v in deployment.spec.template.spec.volumes
if std.objectHas(v, "persistentVolumeClaim")],
local is_stateless = std.length(pvcs) == 0,

// Apps trying to maintain a majority quorum or similar will
// want to tune these carefully.
// NB: Upstream default is surge=1 unavail=1
rollingUpdate: if is_stateless then {
maxSurge: "25%", // rounds up
maxUnavailable: "25%", // rounds down
} else {
// Poor-man's StatelessSet. Useful mostly with replicas=1.
maxSurge: 0,
maxUnavailable: 1,
},
},

// NB: Upstream default is 0
minReadySeconds: 30,

// NB: Regular k8s default is to keep all revisions
revisionHistoryLimit: 10,
// PREV COMMENT
replicas: 1,
assert self.replicas >= 1,
},
},

CrossVersionObjectReference(target): {
apiVersion: target.apiVersion,
kind: target.kind,
name: target.metadata.name,
},

HorizontalPodAutoscaler(name): $._Object("autoscaling/v1", "HorizontalPodAutoscaler", name) {
local hpa = self,

target:: error "target required",

spec: {
scaleTargetRef: $.CrossVersionObjectReference(hpa.target),

minReplicas: hpa.target.spec.replicas,
maxReplicas: error "maxReplicas required",

assert self.maxReplicas >= self.minReplicas,
},
},

StatefulSet(name): $._Object("apps/v1beta1", "StatefulSet", name) {
local sset = self,

annotations:: {},

spec: {
serviceName: name,

template: {
spec: $.PodSpec,
metadata: {
labels: sset.metadata.labels,
annotations: sset.annotations,
},
},

volumeClaimTemplates_:: {},
volumeClaimTemplates: [$.PersistentVolumeClaim(kv[0]) + kv[1] for kv in $.objectItems(self.volumeClaimTemplates_)],

replicas: 1,
assert self.replicas >= 1,
},
},

Job(name): $._Object("batch/v1", "Job", name) {
local job = self,

spec: {
template: {
spec: $.PodSpec {
restartPolicy: "OnFailure",
},
metadata: {
labels: job.metadata.labels,
annotations: {},
},
},

completions: 1,
parallelism: 1,
},
},

DaemonSet(name): $._Object("extensions/v1beta1", "DaemonSet", name) {
local ds = self,
spec: {
template: {
metadata: {
labels: ds.metadata.labels,
annotations: {},
},
spec: $.PodSpec,
},
},
},

Ingress(name): $._Object("extensions/v1beta1", "Ingress", name) {
spec: {},
},

Namespace(name): $._Object("v1", "Namespace", name) {
spec: {},
},
}
13 changes: 13 additions & 0 deletions examples/lib/utils.libjsonnet
@@ -0,0 +1,13 @@
{
/* This file will contain generic utilities mostly inspired by GCL */


/*
Converts the attribute value pairs in the given scope into an argument
string by formatting them for each {attr: value} in the args.
*/
mkargs(args, prefix="--", assign="="):: [
std.format("%s%s%s%s", [prefix, attr, assign, args[attr]])
for attr in std.objectFields(args)
],
}
22 changes: 22 additions & 0 deletions examples/scripts/kubectl.sh
@@ -0,0 +1,22 @@
#!/bin/bash -eu
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#!/bin/bash
{% set i = inventory.parameters %}
KUBECTL="kubectl --context {{i.namespace}}"

${KUBECTL} $@

23 changes: 23 additions & 0 deletions examples/scripts/setup.sh
@@ -0,0 +1,23 @@
#!/bin/bash -eu
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


#!/bin/bash
{% set i = inventory.parameters %}
{% set cluster = i.cluster %}

kubectl config set-context {{i.namespace}} --cluster {{cluster.name}} --user {{cluster.user}} --namespace {{i.namespace}}
kubectl --context {{i.namespace}} create namespace {{i.namespace}}
62 changes: 62 additions & 0 deletions examples/targets/minikube-es/docs/README.md
@@ -0,0 +1,62 @@
# Elasticsearch Minikube

This is a specific version of Elasticsearch to run on a minikube installation.

## Prerequisites

Elasticsearch is a resource-hungry application; for this setup we require
that minikube is running with the following options:

```
$ minikube start --insecure-registry https://quay.io --memory=4096 --cpus=2
```

_If_ you have created the minikube VM previously, you will most likely need to
delete the VM and recreate it with more memory/CPU (i.e.
```$ minikube delete```).

## Setting up

Assuming you're already running Minikube, set up this target:

```
$ scripts/setup.sh
```

This will create a context in your minikube cluster called {{ target }}.
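
To verify the context was created, you can list the contexts known to kubectl (plain kubectl, nothing Kapitan-specific):

```
$ kubectl config get-contexts
```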


Apply the compiled manifests:

```
$ scripts/kubectl.sh apply -f manifests/
```

If the commands above did not error, you should be good to go.

Let's confirm everything is up:

```
$ scripts/kubectl.sh get pods -w
```

## Connecting to Elasticsearch

List the elasticsearch service endpoints running in the cluster:

```
$ minikube service -n {{ inventory.parameters.namespace }} elasticsearch --url
```

and curl the health endpoint, e.g.:
```$ curl http://192.168.99.100:32130/_cluster/health?pretty```


## Deleting Elasticsearch

Deleting is easy (warning, this will remove _everything_):

```
$ scripts/kubectl.sh delete -f manifests/
```

24 changes: 24 additions & 0 deletions examples/targets/minikube-es/main.jsonnet
@@ -0,0 +1,24 @@
local elasticsearch = import "components/elasticsearch/elasticsearch.statefulset.jsonnet";
local kube = import "lib/kube.libjsonnet";
local kap = import "lib/kapitan.libjsonnet";
local inv = kap.inventory();

local es = {
Cluster(name):: {
local c = self,
Name:: name,

local args = {
Args+:: { ClusterName:: c.Name },
},
"es-master": elasticsearch.MasterNode(self.Name + "-master") + args,
"es-data": elasticsearch.DataNode(self.Name + "-data") + args,
"es-client": elasticsearch.ClientNode(self.Name + "-client") + args,
"es-discovery-svc": kube.Service("elasticsearch-discovery") { target_pod:: c["es-master"].spec.template, target_container_name:: "master" },
"es-elasticsearch-svc": kube.Service("elasticsearch") { target_pod:: c["es-client"].spec.template, target_container_name:: "client", type: "NodePort" },
},

cluster: $.Cluster("cluster"),
};

std.prune(es.cluster)
25 changes: 25 additions & 0 deletions examples/targets/minikube-es/target.json
@@ -0,0 +1,25 @@
{
"version": 1,
"vars": {
"target": "minikube-es",
"namespace": "minikube-es"
},
"compile": [
{
"name": "manifests",
"type": "jsonnet",
"path": "targets/minikube-es/main.jsonnet",
"output": "yaml"
},
{
"name": "scripts",
"type": "jinja2",
"path": "scripts"
},
{
"name": "docs",
"type": "jinja2",
"path": "targets/minikube-es/docs"
}
]
}
16 changes: 16 additions & 0 deletions examples/targets/minikube-es/target.yml
@@ -0,0 +1,16 @@
---
version: 1
vars:
target: minikube-es
namespace: minikube-es
compile:
- name: manifests
type: jsonnet
path: targets/minikube-es/main.jsonnet
output: yaml
- name: scripts
type: jinja2
path: scripts
- name: docs
type: jinja2
path: targets/minikube-es/docs
1 change: 1 addition & 0 deletions examples/templates/got.j2
@@ -0,0 +1 @@
is dead {{ is_dead }}
5 changes: 5 additions & 0 deletions jsonnet/kapitan.libjsonnet
@@ -0,0 +1,5 @@
{
jinja2_template(name, ctx):: std.native("jinja2_render_file")(name, std.toString(ctx)),
file_read(name):: std.native("file_read")(name),
inventory(target=std.extVar("target"), inv_path="inventory/"):: std.native("inventory")(target, inv_path),
}
17 changes: 17 additions & 0 deletions kapitan/__init__.py
@@ -0,0 +1,17 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


19 changes: 19 additions & 0 deletions kapitan/__main__.py
@@ -0,0 +1,19 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from kapitan import cli

cli.main()
143 changes: 143 additions & 0 deletions kapitan/cli.py
@@ -0,0 +1,143 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"command line module"

import argparse
import json
import logging
import os
import sys
import yaml
import multiprocessing
from functools import partial

from kapitan.utils import jsonnet_file, PrettyDumper, flatten_dict, searchvar
from kapitan.targets import compile_target_file
from kapitan.resources import search_imports, resource_callbacks, inventory_reclass

logger = logging.getLogger(__name__)


def main():
"main function for command line usage"
parser = argparse.ArgumentParser(prog='kapitan',
description="Define and deploy apps into Kubernetes")
subparser = parser.add_subparsers(help="commands")

eval_parser = subparser.add_parser('eval', help='evaluate jsonnet file')
eval_parser.add_argument('jsonnet_file', type=str)
eval_parser.add_argument('--output', type=str,
choices=('yaml', 'json'), default='yaml',
help='set output format, default is "yaml"')
eval_parser.add_argument('--vars', type=str, default=[], nargs='*',
metavar='VAR',
help='set variables')
eval_parser.add_argument('--search-path', '-J', type=str, default='.',
metavar='JPATH',
help='set search path, default is "."')

compile_parser = subparser.add_parser('compile', help='compile target files')
compile_parser.add_argument('--target-file', '-f', type=str, nargs='+', default=[],
metavar='TARGET', help='target files')
compile_parser.add_argument('--search-path', '-J', type=str, default='.',
metavar='JPATH',
help='set search path, default is "."')
compile_parser.add_argument('--verbose', '-v', help='set verbose mode',
action='store_true', default=False)
compile_parser.add_argument('--no-prune', help='do not prune jsonnet output',
action='store_true', default=False)
compile_parser.add_argument('--quiet', help='set quiet mode, only critical output',
action='store_true', default=False)
compile_parser.add_argument('--output-path', type=str, default='compiled',
metavar='PATH',
help='set output path, default is "./compiled"')
compile_parser.add_argument('--parallelism', '-p', type=int,
default=4, metavar='INT',
help='Number of concurrent compile processes, default is 4')

inventory_parser = subparser.add_parser('inventory', help='show inventory')
inventory_parser.add_argument('--target-name', '-t', default='',
help='set target name, default is all targets')
inventory_parser.add_argument('--inventory-path', default='./inventory',
help='set inventory path, default is "./inventory"')
inventory_parser.add_argument('--flat', '-F', help='flatten nested inventory variables',
action='store_true', default=False)

searchvar_parser = subparser.add_parser('searchvar',
help='show all inventory files where var is declared')
searchvar_parser.add_argument('searchvar', type=str, metavar='VARNAME',
help='flattened full variable name. Example: ' +
'parameters.cluster.type')
searchvar_parser.add_argument('--inventory-path', default='./inventory',
help='set inventory path, default is "./inventory"')

args = parser.parse_args()

logger.debug('Running with args: %s', args)

cmd = sys.argv[1]
if cmd == 'eval':
file_path = args.jsonnet_file
search_path = os.path.abspath(args.search_path)
ext_vars = {}
if args.vars:
ext_vars = dict(var.split('=') for var in args.vars)
json_output = None
_search_imports = lambda cwd, imp: search_imports(cwd, imp, search_path)
json_output = jsonnet_file(file_path, import_callback=_search_imports,
native_callbacks=resource_callbacks(search_path),
ext_vars=ext_vars)
if args.output == 'yaml':
json_obj = json.loads(json_output)
yaml_output = yaml.safe_dump(json_obj, default_flow_style=False)
print yaml_output
elif json_output:
print json_output
elif cmd == 'compile':
if args.verbose:
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
elif args.quiet:
logging.basicConfig(level=logging.CRITICAL, format="%(message)s")
else:
logging.basicConfig(level=logging.INFO, format="%(message)s")
search_path = os.path.abspath(args.search_path)
if args.target_file:
pool = multiprocessing.Pool(args.parallelism)
worker = partial(compile_target_file,
search_path=search_path,
output_path=args.output_path,
prune=(not args.no_prune))
try:
pool.map(worker, args.target_file)
except RuntimeError as e:
# if compile worker fails, terminate immediately
pool.terminate()
raise
else:
logger.error("Nothing to compile")
elif cmd == 'inventory':
inv = inventory_reclass(args.inventory_path)
if args.target_name != '':
inv = inv['nodes'][args.target_name]
if args.flat:
inv = flatten_dict(inv)
print yaml.dump(inv, width=10000)
else:
print yaml.dump(inv, Dumper=PrettyDumper, default_flow_style=False)
elif cmd == 'searchvar':
searchvar(args.searchvar, args.inventory_path)
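
For reference, the compile subcommand above fans each --target-file out to kapitan.targets.compile_target_file() through a multiprocessing pool. A minimal sketch of the equivalent direct call, assuming a hypothetical target file at targets/minikube-es.yml (not part of this commit):

    import os
    from kapitan.targets import compile_target_file

    # Compile a single target without going through the CLI or the process pool.
    compile_target_file('targets/minikube-es.yml',         # hypothetical target file
                        search_path=os.path.abspath('.'),  # where jsonnet/jinja2 sources are looked up
                        output_path='compiled',            # items land in compiled/<target>/<name>/
                        prune=True)                        # std.prune() the jsonnet output (CLI default)
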
155 changes: 155 additions & 0 deletions kapitan/resources.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,155 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"kapitan resources"

import errno
from functools import partial
import json
import logging
import os
import reclass
import reclass.core
import yaml

from kapitan.utils import render_jinja2_file, memoize

logger = logging.getLogger(__name__)

def resource_callbacks(search_path):
"""
    Returns a dict with all the functions to be passed
    as the native_callbacks keyword argument to jsonnet
    search_path can be used by the native functions to access files
"""

return {"jinja2_render_file": (("name", "ctx"),
partial(jinja2_render_file, search_path)),
"inventory": (("target", "inv_path"),
partial(inventory, search_path)),
"file_read": (("name",),
partial(read_file, search_path)),
}


def jinja2_render_file(search_path, name, ctx):
"""
Render jinja2 file name with context ctx.
search_path is used to find the file name
as there is a limitation with jsonnet's native_callback approach:
one can't access the current directory being evaluated
"""
ctx = json.loads(ctx)
_full_path = os.path.join(search_path, name)
logger.debug("jinja2_render_file trying file %s", _full_path)
if os.path.exists(_full_path):
logger.debug("jinja2_render_file found file at %s", _full_path)
return render_jinja2_file(_full_path, ctx)
# default IOError if we reach here
raise IOError("Could not find file %s" % name)

def read_file(search_path, name):
"return content of file in name"
full_path = os.path.join(search_path, name)
logger.debug("read_file trying file %s", full_path)
if os.path.exists(full_path):
logger.debug("read_file found file at %s", full_path)
with open(full_path) as f:
return f.read()
raise IOError("Could not find file %s" % name)

def search_imports(cwd, import_str, search_path):
"""
This is only to be used as a callback for the jsonnet API!
- cwd is the import context $CWD,
- import_str is the "bla.jsonnet" in 'import "bla.jsonnet"'
    - search_path is where to look for import_str if it is not found in cwd
    The jsonnet API only passes cwd and import_str, so search_path
    needs to be captured in a closure.
"""
basename = os.path.basename(import_str)
full_import_path = os.path.join(cwd, import_str)

# if import_str not found, search in search_path
if not os.path.exists(full_import_path):
_full_import_path = os.path.join(search_path, import_str)
# if found, set as full_import_path
if os.path.exists(_full_import_path):
full_import_path = _full_import_path
logger.debug("import_str: %s found in search_path: %s",
import_str, search_path)

normalised_path = os.path.normpath(full_import_path)

logger.debug("cwd:%s import_str:%s basename:%s -> norm:%s",
cwd, import_str, basename, normalised_path)

return normalised_path, open(normalised_path).read()

def inventory(search_path, target, inventory_path="inventory/"):
"""
    Reads the inventory at inventory_path (relative to search_path).
    Set inventory_path to change the default value of "inventory/".
    Returns a dictionary with the inventory for target
"""
full_inv_path = os.path.join(search_path, inventory_path)

inv_target = inventory_reclass(full_inv_path)["nodes"][target]

return inv_target


@memoize
def inventory_reclass(inventory_path):
"""
Runs a reclass inventory in inventory_path
(same output as running ./reclass.py -b streams/ --inventory)
    Will attempt to read the reclass config from 'reclass-config.yml', otherwise
    it falls back to the default config.
Returns a reclass style dictionary
"""

reclass_config = {'storage_type': 'yaml_fs',
'inventory_base_uri': inventory_path,
'nodes_uri': os.path.join(inventory_path, 'targets'),
'classes_uri': os.path.join(inventory_path, 'classes')
}

try:
cfg_file = os.path.join(inventory_path, 'reclass-config.yml')
with open(cfg_file) as reclass_cfg:
reclass_config = yaml.safe_load(reclass_cfg)
# normalise relative nodes_uri and classes_uri paths
for uri in ('nodes_uri', 'classes_uri'):
uri_val = reclass_config.get(uri)
uri_path = os.path.join(inventory_path, uri_val)
normalised_path = os.path.normpath(uri_path)
reclass_config.update({uri: normalised_path})
logger.debug("Using reclass inventory config at: %s", cfg_file)
except IOError as ex:
# If file does not exist, ignore
if ex.errno == errno.ENOENT:
logger.debug("Using reclass inventory config defaults")

storage = reclass.get_storage(reclass_config['storage_type'], reclass_config['nodes_uri'],
reclass_config['classes_uri'], default_environment='base')
class_mappings = reclass_config.get('class_mappings') # this defaults to None (disabled)
_reclass = reclass.core.Core(storage, class_mappings)

inv = _reclass.inventory()

logger.debug("reclass inventory: %s", inv)
return inv
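
Since the jsonnet bindings only hand (cwd, import_str) to the import callback, search_path has to be captured in a closure before search_imports() is passed in; cli.py and targets.py both do this. A minimal sketch of the wiring, assuming a hypothetical lib/component.jsonnet on the search path:

    import os
    from kapitan.resources import search_imports, resource_callbacks
    from kapitan.utils import jsonnet_file

    search_path = os.path.abspath('.')
    _search_imports = lambda cwd, imp: search_imports(cwd, imp, search_path)

    json_output = jsonnet_file('lib/component.jsonnet',          # hypothetical jsonnet file
                               import_callback=_search_imports,  # resolves imports via search_path
                               native_callbacks=resource_callbacks(search_path),
                               ext_vars={'target': 'my-target'}) # read in jsonnet via std.extVar('target')
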
180 changes: 180 additions & 0 deletions kapitan/targets.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,180 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"kapitan targets"

import logging
import os
import errno
import json
import re
import shutil
import jsonschema
import yaml

from kapitan.resources import search_imports, resource_callbacks, inventory
from kapitan.utils import jsonnet_file, jsonnet_prune, render_jinja2_dir, PrettyDumper

logger = logging.getLogger(__name__)

def compile_target_file(target_file, search_path, output_path, **kwargs):
"""
Loads target file, compiles file (by scanning search_path)
and writes to output_path
"""
target_obj = load_target(target_file)
target_name = target_obj["vars"]["target"]
compile_obj = target_obj["compile"]
ext_vars = target_obj["vars"]

for obj in compile_obj:
if obj["type"] == "jsonnet":
compile_file_sp = os.path.join(search_path, obj["path"])
if os.path.exists(compile_file_sp):
_output_path = os.path.join(output_path, target_name, obj["name"])
update_output_path_dirs(_output_path)
logger.debug("Compiling %s", compile_file_sp)
compile_jsonnet(compile_file_sp, _output_path, search_path,
ext_vars, output=obj["output"], **kwargs)
else:
raise IOError("Path not found in search_path: %s" % obj["path"])

if obj["type"] == "jinja2":
compile_path_sp = os.path.join(search_path, obj["path"])
if os.path.exists(compile_path_sp):
_output_path = os.path.join(output_path, target_name, obj["name"])
update_output_path_dirs(_output_path)
# copy ext_vars to dedicated jinja2 context so we can update it
ctx = ext_vars.copy()
ctx["inventory"] = inventory(search_path, target_name)
compile_jinja2(compile_path_sp, ctx, _output_path)
else:
raise IOError("Path not found in search_path: %s" % obj["path"])

def compile_jinja2(path, context, output_path):
"""
Write items in path as jinja2 rendered files to output_path.
"""
for item_key, item_value in render_jinja2_dir(path, context).iteritems():
full_item_path = os.path.join(output_path, item_key)
try:
os.makedirs(os.path.dirname(full_item_path))
except OSError as ex:
# If directory exists, pass
if ex.errno == errno.EEXIST:
pass
with open(full_item_path, 'w') as fp:
fp.write(item_value["content"])
mode = item_value["mode"]
os.chmod(full_item_path, mode)
logger.info("Wrote %s with mode %.4o", full_item_path, mode)

def compile_jsonnet(file_path, output_path, search_path, ext_vars, **kwargs):
"""
    Evaluate the jsonnet file at file_path and write its items as files to output_path.
    Set output to write as json or yaml.
    search_path and ext_vars will be passed as parameters to jsonnet_file()
kwargs:
output: default 'yaml', accepts 'json'
prune: default True, accepts False
"""
_search_imports = lambda cwd, imp: search_imports(cwd, imp, search_path)
json_output = jsonnet_file(file_path, import_callback=_search_imports,
native_callbacks=resource_callbacks(search_path),
ext_vars=ext_vars)

output = kwargs.get('output', 'yaml')
prune = kwargs.get('prune', True)

if prune:
json_output = jsonnet_prune(json_output)
logger.debug("Pruned output")
for item_key, item_value in json.loads(json_output).iteritems():
# write each item to disk
if output == 'json':
file_path = os.path.join(output_path, '%s.%s' % (item_key, output))
with open(file_path, 'w') as fp:
json.dump(item_value, fp, indent=4, sort_keys=True)
logger.info("Wrote %s", file_path)
elif output == 'yaml':
file_path = os.path.join(output_path, '%s.%s' % (item_key, "yml"))
with open(file_path, 'w') as fp:
yaml.dump(item_value, stream=fp, Dumper=PrettyDumper, default_flow_style=False)
logger.info("Wrote %s", file_path)
else:
            raise ValueError('output is neither "json" nor "yaml"')

def update_output_path_dirs(output_path):
"""
Attempt to re/create (nested) directories
"""
try:
os.makedirs(output_path)
except OSError as ex:
# If directory exists, remove and recreate
if ex.errno == errno.EEXIST:
shutil.rmtree(output_path)
logger.debug("Deleted %s", output_path)
os.makedirs(output_path)
logger.debug("Re-created %s", output_path)


def load_target(target_file):
"""
Loads and validates a target_file name
    The format of the target file is determined by its extension (.json, .yml, .yaml)
Returns a dict object if target is valid
Otherwise raises ValidationError
"""
schema = {
"type": "object",
"properties": {
"version": {"type": "number"},
"vars": {"type": "object"},
"compile": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"type": {"type": "string"},
"path": {"type": "string"},
"output": {"type": "string"},
},
"required": ["type", "name"],
"minItems": 1,
}
},
},
"required": ["version", "compile"],
}

bname = os.path.basename(target_file)

if re.match(r".+\.json$", bname):
with open(target_file) as fp:
target_obj = json.load(fp)
jsonschema.validate(target_obj, schema)
logger.debug("Target file %s is valid", target_file)

return target_obj
if re.match(r".+\.(yaml|yml)$", bname):
with open(target_file) as fp:
target_obj = yaml.safe_load(fp)
jsonschema.validate(target_obj, schema)
logger.debug("Target file %s is valid", target_file)

return target_obj
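
load_target() only accepts targets that validate against the schema above: "version" and "compile" are required, and every compile item needs at least "name" and "type". A hedged sketch of a minimal target object (all names and paths are illustrative):

    # Equivalent to what load_target() would return for a small YAML/JSON target file.
    target_obj = {
        "version": 1,
        "vars": {"target": "my-target"},            # compile_target_file reads vars["target"]
        "compile": [
            {"name": "manifests",                   # output goes to compiled/my-target/manifests/
             "type": "jsonnet",                     # or "jinja2"
             "path": "components/my-app.jsonnet",   # resolved under the search path
             "output": "yaml"},                     # or "json"
        ],
    }
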
169 changes: 169 additions & 0 deletions kapitan/utils.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,169 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"random utils"
import functools
from hashlib import sha256
import logging
import os
import stat
import jinja2
import _jsonnet as jsonnet
import collections
import yaml


logger = logging.getLogger(__name__)

def normalise_join_path(dirname, path):
"Join dirname with path and return in normalised form"
logger.debug(os.path.normpath(os.path.join(dirname, path)))
return os.path.normpath(os.path.join(dirname, path))


def render_jinja2_template(content, context):
"Render jinja2 content with context"
return jinja2.Template(content, undefined=jinja2.StrictUndefined).render(context)

def jinja2_sha256_hex_filter(string):
"Returns hex digest for string"
return sha256(string).hexdigest()

def render_jinja2_file(name, context):
"Render jinja2 file name with context"
path, filename = os.path.split(name)
env = jinja2.Environment(
undefined=jinja2.StrictUndefined,
loader=jinja2.FileSystemLoader(path or './'),
trim_blocks=True,
lstrip_blocks=True,
)
env.filters['sha256'] = jinja2_sha256_hex_filter
return env.get_template(filename).render(context)


def render_jinja2_dir(path, context):
"""
Render files in path with context
    Returns a dict where the key is the filename (with subpath)
    and the value is a dict with content and mode
Empty paths will not be rendered
"""
rendered = {}
for root, _, files in os.walk(path):
for f in files:
render_path = os.path.join(root, f)
logger.debug("render_jinja2_dir rendering %s with context %s",
render_path, context)
# get subpath and filename, strip any leading/trailing /
name = render_path[len(os.path.commonprefix([root, path])):].strip('/')
rendered[name] = {"content": render_jinja2_file(render_path, context),
"mode": file_mode(render_path)
}
return rendered


def file_mode(name):
"Returns mode for file name"
st = os.stat(name)
return stat.S_IMODE(st.st_mode)

def jsonnet_file(file_path, **kwargs):
"""
Evaluate file_path jsonnet file.
kwargs are documented in http://jsonnet.org/implementation/bindings.html
"""
return jsonnet.evaluate_file(file_path, **kwargs)

def jsonnet_prune(jsonnet_str):
"Returns a pruned jsonnet_str"
return jsonnet.evaluate_snippet("snippet", "std.prune(%s)" % jsonnet_str)

def memoize(obj):
"""
Decorator that will cache a function's return value should it be called
with the same arguments.
"""
cache = obj.cache = {}

@functools.wraps(obj)
def memoizer(*args, **kwargs):
"checks if args are memoizible"
if args not in cache:
cache[args] = obj(*args, **kwargs)
return cache[args]
return memoizer


class PrettyDumper(yaml.SafeDumper):
'''
Increases indent of nested lists.
    By default, they are indented at the same level as the key on the previous line.
More info on https://stackoverflow.com/questions/25108581/python-yaml-dump-bad-indentation
'''
def increase_indent(self, flow=False, indentless=False):
return super(PrettyDumper, self).increase_indent(flow, False)


def flatten_dict(d, parent_key='', sep='.'):
'''
Flatten nested elements in a dictionary
'''
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
if isinstance(v, collections.MutableMapping):
items.extend(flatten_dict(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)


def searchvar(flat_var, inventory_path):
'''
show all inventory files where a given reclass variable is declared
'''

def deep_get(x, keys):
if type(x) is dict:
try:
return deep_get(x[keys[0]], keys[1:])
except (IndexError, KeyError):
pass
else:
if len(keys) == 0:
return x
else:
return None

output = []
    maxlength = 0
keys = flat_var.split(".")
for root, dirs, files in os.walk(inventory_path):
for file in files:
if file.endswith(".yml"):
filename = os.path.join(root, file)
fd = open(filename, 'r')
data = yaml.safe_load(fd)
value = deep_get(data, keys)
if value is not None:
output.append((filename, value))
                    if len(filename) > maxlength:
                        maxlength = len(filename)
fd.close()
for i in output:
        print('{0!s:{l}} {1!s}'.format(*i, l=maxlength + 2))
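
flatten_dict() above is what `kapitan inventory --flat` uses to collapse the reclass output into dotted keys, the same dotted convention that searchvar expects in VARNAME. A small sketch of its behaviour (values are illustrative):

    from kapitan.utils import flatten_dict

    nested = {"parameters": {"cluster": {"type": "minikube", "zone": "local"}}}
    print(flatten_dict(nested))
    # {'parameters.cluster.type': 'minikube', 'parameters.cluster.zone': 'local'}
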

5 changes: 5 additions & 0 deletions requirements.txt
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
jsonnet==0.9.4
PyYAML==3.12
Jinja2==2.9.4
jsonschema==2.5.1
reclass==1.4.1
69 changes: 69 additions & 0 deletions setup.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,69 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Kapitan setup.py for PIP install
"""

from setuptools import setup, find_packages
from codecs import open
from os import path

here = path.abspath(path.dirname(__file__))

with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()

setup(
name='kapitan',
version='0.9.14',

description='A generic jsonnet based configuration deployment system',
long_description=long_description,

author='Ricardo Amaro',
author_email='ramaro@google.com',
license='MIT',

classifiers=[
'Development Status :: 3 - Alpha',

'Intended Audience :: Developers',
'Topic :: Software Development :: Build Tools',

'License :: OSI Approved :: MIT License',

'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
],

keywords='jsonnet kubernetes reclass jinja vault',
py_modules=["kapitan"],
packages=find_packages(),
install_requires=[
'jsonnet>=0.9.4',
'PyYAML>=3.12',
'Jinja2>=2.9.4',
'reclass>=1.4.1',
'jsonschema>=2.5.1'
],

entry_points={
'console_scripts': [
'kapitan=kapitan.cli:main',
],
},
)
16 changes: 16 additions & 0 deletions tests/__init__.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

29 changes: 29 additions & 0 deletions tests/test_jinja2_filters.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,29 @@
#!/usr/bin/python
#
# Copyright 2017 The Kapitan Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"tests"
import unittest
import tempfile
from kapitan.utils import render_jinja2_file

class Jinja2FiltersTest(unittest.TestCase):
def test_sha256(self):
with tempfile.NamedTemporaryFile() as f:
f.write("{{text|sha256}}")
f.seek(0)
context = {"text":"this and that"}
digest = 'e863c1ac42619a2b429a08775a6acd89ff4c2c6b8dae12e3461a5fa63b2f92f5'
self.assertEqual(render_jinja2_file(f.name, context), digest)
