add pre-commit hook to facilitate local testing (#9158)
* add pre-commit hook configuration

* add tmp.md to .gitignore

* describe the use of pre-commit hook in CONTRIBUTING.md

* fix docs/integration.md errors identified by markdownlint

* fix docs/<file>.md errors identified by markdownlint

* docs/azure-csi.md
* docs/azure.md
* docs/bootstrap-os.md
* docs/calico.md
* docs/debian.md
* docs/fcos.md
* docs/gcp-lb.md
* docs/kubernetes-apps/registry.md
* docs/setting-up-your-first-cluster.md
* docs/vagrant.md
* docs/vars.md

* fix contrib/<file>.md errors identified by markdownlint
cristicalin authored and alegrey91 committed Aug 29, 2022
1 parent b410afe commit bcbe249
Showing 20 changed files with 268 additions and 135 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -112,3 +112,4 @@ roles/**/molecule/**/__pycache__/

# Temp location used by our scripts
scripts/tmp/
tmp.md
1 change: 1 addition & 0 deletions .markdownlint.yaml
@@ -1,2 +1,3 @@
---
MD013: false
MD029: false
48 changes: 48 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,48 @@
---
repos:
  - repo: https://github.com/adrienverge/yamllint.git
    rev: v1.27.1
    hooks:
      - id: yamllint
        args: [--strict]

  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
        args: [-r, "~MD013,~MD029"]
        exclude: "^.git"

  - repo: local
    hooks:
      - id: ansible-lint
        name: ansible-lint
        entry: ansible-lint -v
        language: python
        pass_filenames: false
        additional_dependencies:
          - .[community]

      - id: ansible-syntax-check
        name: ansible-syntax-check
        entry: env ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" ansible-playbook --syntax-check
        language: python
        files: "^cluster.yml|^upgrade-cluster.yml|^reset.yml|^extra_playbooks/upgrade-only-k8s.yml"

      - id: tox-inventory-builder
        name: tox-inventory-builder
        entry: bash -c "cd contrib/inventory_builder && tox"
        language: python
        pass_filenames: false

      - id: check-readme-versions
        name: check-readme-versions
        entry: tests/scripts/check_readme_versions.sh
        language: script
        pass_filenames: false

      - id: ci-matrix
        name: ci-matrix
        entry: tests/scripts/md-table/test.sh
        language: script
        pass_filenames: false
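
For reference, the `ansible-syntax-check` hook above amounts to running the syntax check by hand with the same environment; a minimal sketch:

```ShellSession
# Rough manual equivalent of the ansible-syntax-check hook (sketch)
ANSIBLE_INVENTORY=inventory/local-tests.cfg ANSIBLE_REMOTE_USER=root \
ANSIBLE_BECOME="true" ANSIBLE_BECOME_USER=root ANSIBLE_VERBOSITY="3" \
ansible-playbook --syntax-check cluster.yml upgrade-cluster.yml reset.yml
```
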
17 changes: 12 additions & 5 deletions CONTRIBUTING.md
@@ -16,7 +16,12 @@ pip install -r tests/requirements.txt

#### Linting

Kubespray uses a [pre-commit](https://pre-commit.com) hook configuration to run several linters. Please install this tool and use it to run validation tests before submitting a PR.

```ShellSession
pre-commit install
pre-commit run -a # To run pre-commit hook on all files in the repository, even if they were not modified
```
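
pre-commit can also run a single hook by its id, which is handy when iterating on one class of failure; for example (hook ids taken from the configuration above):

```ShellSession
pre-commit run markdownlint --all-files  # only the markdownlint hook, on every file
pre-commit run yamllint                  # only yamllint, on currently staged files
```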

#### Molecule

@@ -33,7 +38,9 @@ Vagrant with VirtualBox or libvirt driver helps you to quickly spin test cluster
1. Submit an issue describing your proposed change to the repo in question.
2. The [repo owners](OWNERS) will respond to your issue promptly.
3. Fork the desired repo, develop and test your code changes.
4. Install [pre-commit](https://pre-commit.com) and set it up in your development repo.
5. Address any pre-commit validation failures.
6. Sign the CNCF CLA (<https://git.k8s.io/community/CLA.md#the-contributor-license-agreement>)
7. Submit a pull request.
8. Work with the reviewers on their suggestions.
9. Make sure to rebase onto the HEAD of your target branch and squash unnecessary commits (<https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/>) before your contribution is finally merged; a sample flow is sketched below.
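
For the final rebase-and-squash step, one possible flow looks like the following sketch (it assumes your target branch is `master`; adjust as needed):

```ShellSession
git fetch origin
git rebase origin/master       # replay your commits on the target branch HEAD
git rebase -i HEAD~3           # squash fixup commits; 3 is only an example
git push --force-with-lease    # update the branch backing your pull request
```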
12 changes: 9 additions & 3 deletions contrib/network-storage/glusterfs/roles/glusterfs/README.md
@@ -14,12 +14,16 @@ This role performs basic installation and setup of Gluster, but it does not conf

Available variables are listed below, along with default values (see `defaults/main.yml`):

```yaml
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).

```yaml
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```

For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
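
For example, a hypothetical override in your playbook vars or `group_vars` (values are illustrative only) could look like:

```yaml
glusterfs_default_release: wheezy-backports  # apt default_release on Debian Wheezy
glusterfs_ppa_use: yes                       # use the official Gluster PPA on Ubuntu
glusterfs_ppa_version: "3.5"                 # which PPA series to track
```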

@@ -29,9 +33,11 @@ None.

## Example Playbook

```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```

For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).

3 changes: 1 addition & 2 deletions contrib/terraform/aws/README.md
@@ -36,8 +36,7 @@ terraform apply -var-file=credentials.tfvars
```

- Terraform automatically creates an Ansible Inventory file called `hosts` with the created infrastructure in the directory `inventory`
- Ansible will automatically generate an ssh config file for your bastion hosts. To connect to hosts through the bastion over ssh, use the generated `ssh-bastion.conf`; Ansible automatically detects the bastion and adjusts `ssh_args`.

```commandline
ssh -F ./ssh-bastion.conf user@$ip
4 changes: 1 addition & 3 deletions contrib/terraform/exoscale/README.md
@@ -31,9 +31,7 @@ The setup looks like following

## Requirements

* Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include version and remove all `versions.tf` files)

## Quickstart

4 changes: 1 addition & 3 deletions contrib/terraform/vsphere/README.md
@@ -35,9 +35,7 @@ This setup assumes that the DHCP is disabled in the vSphere cluster and IP addre

## Requirements

* Terraform 0.13.0 or newer (0.12 also works if you modify the provider block to include version and remove all `versions.tf` files)

## Quickstart

15 changes: 12 additions & 3 deletions docs/azure-csi.md
@@ -57,19 +57,28 @@ The name of the network security group your instances are in, can be retrieved v
These will have to be generated first:

- Create an Azure AD Application with:

```ShellSession
az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET
```

Display name, identifier-uri, homepage and the password can be chosen.

Note the AppId in the output.

- Create Service principal for the application with:

```ShellSession
az ad sp create --id AppId
```

This is the AppId from the last command

- Create the role assignment with:

```ShellSession
az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID
```

`azure_csi_aad_client_id` must be set to the AppId; `azure_csi_aad_client_secret` is your chosen secret.
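
Put together, the resulting inventory variables might look like the following sketch (the id and secret are placeholders):

```yaml
azure_csi_aad_client_id: "00000000-0000-0000-0000-000000000000"  # AppId noted above
azure_csi_aad_client_secret: "CLIENT_SECRET"                     # the password you chose
```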

19 changes: 16 additions & 3 deletions docs/azure.md
@@ -71,14 +71,27 @@ The name of the resource group that contains the route table. Defaults to `azur
These will have to be generated first:

- Create an Azure AD Application with:

```ShellSession
az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET
```

Display name, identifier-uri, homepage and the password can be chosen.
Note the AppId in the output.

- Create Service principal for the application with:

```ShellSession
az ad sp create --id AppId
```

This is the AppId from the last command

- Create the role assignment with:

```ShellSession
az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID
```

`azure_aad_client_id` must be set to the AppId; `azure_aad_client_secret` is your chosen secret.
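
As a sketch, with placeholder values, the corresponding variables would then be set like so:

```yaml
azure_aad_client_id: "00000000-0000-0000-0000-000000000000"  # AppId noted above
azure_aad_client_secret: "CLIENT_SECRET"                     # the password you chose
```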

12 changes: 7 additions & 5 deletions docs/bootstrap-os.md
@@ -48,11 +48,13 @@ The `kubespray-defaults` role is expected to be run before this role.

Remember to disable fact gathering since Python might not be present on hosts.

```yaml
- hosts: all
gather_facts: false # not all hosts might be able to run modules yet
roles:
- kubespray-defaults
- bootstrap-os
```
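
Such a play can then be run as usual; a sketch, assuming a hypothetical `bootstrap.yml` containing the play above and your own inventory path:

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml --become bootstrap.yml
```
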
## License
3 changes: 1 addition & 2 deletions docs/calico.md
@@ -124,8 +124,7 @@ You need to edit your inventory and add:
* `calico_rr` group with nodes in it. `calico_rr` can be combined with
`kube_node` and/or `kube_control_plane`. `calico_rr` group also must be a child
group of `k8s_cluster` group.
* `cluster_id` by route reflector node/group (see details [here](https://hub.docker.com/r/calico/routereflector/))

Here's an example of Kubespray inventory with standalone route reflectors:

37 changes: 21 additions & 16 deletions docs/debian.md
@@ -3,34 +3,39 @@
Debian Jessie installation Notes:

- Add

```ini
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```

to `/etc/default/grub`. Then update with

```ShellSession
sudo update-grub
sudo update-grub2
sudo reboot
```

- Add the [backports](https://backports.debian.org/Instructions/), which contain systemd 230, and update systemd.

```ShellSession
apt-get -t jessie-backports install systemd
```

(Necessary because the default systemd version (215) does not support the "Delegate" directive in service files.)

- Add the Ansible repository and install Ansible to get a proper version

```ShellSession
sudo add-apt-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```

- Install Jinja2 and Python-Netaddr

```ShellSession
sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
2 changes: 1 addition & 1 deletion docs/fcos.md
@@ -54,7 +54,7 @@ Prepare ignition and serve via http (e.g. `python -m http.server`)

### create guest

```ShellSession
machine_name=myfcos1
ignition_url=http://mywebserver/fcos.ign
10 changes: 7 additions & 3 deletions docs/gcp-lb.md
@@ -2,15 +2,19 @@

Google Cloud Platform can be used for the creation of Kubernetes Service load balancers.

This feature is enabled by adding parameters to `kube-controller-manager` and `kubelet`. You need to specify:

```ShellSession
--cloud-provider=gce
--cloud-config=/etc/kubernetes/cloud-config
```

To get it working in kubespray, you need to add a tag to the GCE instances, specify that tag in the kubespray group vars, and also set `cloud_provider` to `gce`. For example, in file `group_vars/all/gcp.yml`:

```yaml
cloud_provider: gce
gce_node_tags: k8s-lb
```

When you set this up and create a Service in Kubernetes with `type=LoadBalancer`, the cloud provider will create a public IP and set up the firewall.
Note: The cloud provider runs under the VM service account, so this account needs the correct permissions to be able to create all GCP resources.
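
For illustration, a minimal Service of this type might look like the sketch below (name, selector and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # asks the GCE cloud provider for a public load balancer
  selector:
    app: my-app
  ports:
    - port: 80         # externally exposed port
      targetPort: 8080 # container port behind the Service
```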