
Coding Best Practices


Note: this content is outdated and needs a review.

Introduction

Modified from https://github.com/openshift/openshift-ansible/blob/master/docs/best_practices_guide.adoc

Quotes denote a Rule/Best Practice

The purpose of this guide is to describe the preferred patterns and best practices used in this repository (both in Ansible and Python).

It is important to note that this repository may not currently comply with all best practices, but the intention is that it will.

All new pull requests created against this repository MUST comply with this guide.

The key words MUST, MUST NOT, SHOULD, and SHOULD NOT in this guide are to be interpreted as described in RFC 2119.

Python

Method Signatures

When adding a new parameter to an existing method, a default value SHOULD be used

The purpose of this rule is to make it so that method signatures are backwards compatible.

If this rule isn’t followed, it will be necessary for the person who changed the method to search out all callers and make sure that they’re able to use the new method signature.

Before: def add_person(first_name, last_name):

After: def add_person(first_name, last_name, age=None):

PyLint

PyLint is used via MIR SWAMP in an attempt to keep the Python code as clean and as manageable as possible.

PyLint rules MUST NOT be disabled on a whole file.

Instead, disable the PyLint check on the line where PyLint is complaining.

PyLint rules MUST NOT be disabled unless they meet one of the following exceptions.

Exceptions:

When PyLint fails because of a dependency that can’t be installed on the build bot

When PyLint fails because of including a module that is outside of control (like Ansible)

When PyLint fails, but the code makes more sense the way it is formatted (stylistic exception). For this exception, the description of the PyLint disable MUST state why the code is more clear, AND the person reviewing the PR will decide if they agree or not. The reviewer may reject the PR if they disagree with the reason for the disable.

All PyLint rule disables MUST be documented in the code. The purpose of this rule is to inform future developers about the disable.

Specifically, the following MUST accompany every PyLint disable:

  • Why is the check being disabled?
  • Is disabling this check meant to be permanent or temporary?

Example:

# Reason: disable pylint maybe-no-member because overloaded use of
#     the module name causes pylint to not detect that 'results'
#     is an array or hash
# Status: permanently disabled unless a way is found to fix this.
# pylint: disable=maybe-no-member

metadata[line] = results.pop()

Ansible

Yaml Files (Playbooks, Roles, Vars, etc)

Ansible files SHOULD NOT use JSON (use pure YAML instead). YAML is a superset of JSON, which means that Ansible allows JSON syntax to be interspersed. Even though YAML (and by extension Ansible) allows for this, JSON SHOULD NOT be used.

Rationale:

  • Ansible is able to give clearer error messages when the files are pure YAML

  • YAML reads nicer (preference held by several team members)

  • YAML makes for nicer diffs as YAML tends to be multi-line, whereas JSON tends to be more concise

Exceptions:

Ansible static inventory files are INI files. To pass in variables for specific hosts, Ansible allows for these variables to be put inside of the static inventory files. These variables can be in JSON format, but can’t be in YAML format. This is an acceptable use of JSON, as YAML is not allowed in this case.

Every effort should be made to keep our Ansible YAML files in pure YAML.
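
For illustration, here is the same hypothetical task written with inline JSON syntax (which YAML accepts) and in pure YAML; the task, path, and host pattern are made up for this example:

# JSON-style flow syntax (valid, but avoid):
- hosts: localhost
  tasks:
  - {"name": "Create a scratch directory", "file": {"path": "/tmp/example", "state": "directory"}}

# Pure YAML (preferred):
- hosts: localhost
  tasks:
  - name: Create a scratch directory
    file:
      path: /tmp/example
      state: directory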

Modules

Custom Ansible modules SHOULD only be embedded in the role(s) in which they are used. Global modules SHOULD NEVER be used.

The purpose of this rule is to make it easy to include custom modules in our playbooks, so they can more easily be shared and reused on Ansible Galaxy.

- hosts: hosts
  gather_facts: yes
  roles:
  - role: example
  post_tasks:
  - custom_module:
      role: common
      hostname: host
      public_hostname: host.example.com
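
A custom module embedded in a role normally lives in that role's library/ directory, where Ansible picks it up automatically when the role is used; a rough sketch of the layout (role and module names are hypothetical):

roles/example/
├── library/
│   └── custom_module.py
└── tasks/
    └── main.yml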

Parameters to Ansible modules SHOULD use the Yaml dictionary format when 3 or more parameters are being passed

When a module has several parameters that are being passed in, it’s hard to see exactly what value each parameter is getting. It is preferred to use the Ansible Yaml syntax to pass in parameters so that it’s more clear what values are being passed for each parameter.

Bad:
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link
Good:
- file:
    src: /file/to/link/to
    dest: /path/to/symlink
    owner: foo
    group: foo
    state: link

Parameters to Ansible modules SHOULD use the Yaml dictionary format when the line length exceeds 120 characters

Lines that are long quickly become a wall of text that isn’t easily parsable. It is preferred to use the Ansible Yaml syntax to pass in parameters so that it’s more clear what values are being passed for each parameter.

Bad:
- get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf sha256sum=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
Good:
- get_url:
    url: http://example.com/path/file.conf
    dest: /etc/foo.conf
    sha256sum: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c

The Ansible command module SHOULD be used instead of the Ansible shell module.

See the Ansible documentation on why using the command module is a best practice.

The Ansible shell module can run most commands that can be run from a bash CLI. This makes it extremely powerful, but it also opens our playbooks up to being exploited by attackers.

Bad:
- shell: "/bin/echo {{ cli_var }}"

Better:
- command: "/bin/echo {{ cli_var }}"

The Ansible quote filter MUST be used with any variable passed into the shell module.

See Ansible doc describing why to use the quote filter

It is recommended not to use the shell module. However, if it absolutely must be used, all variables passed into the shell module MUST use the quote filter to ensure they are shell safe.

Bad:
- shell: "/bin/echo {{ cli_var }}"

Good:
- shell: "/bin/echo {{ cli_var | quote }}"

Defensive Programming

See Ansible Fail Module

Ansible playbooks MUST begin with checks for any variables that they require.

If an Ansible playbook requires certain variables to be set, it’s best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the playbook.

Example:

- hosts: localhost
  gather_facts: no
  tasks:
  - fail: msg="This playbook requires g_environment to be set and non empty"
    when: g_environment is not defined or g_environment == ''

Ansible roles tasks/main.yml file MUST begin with checks for any variables that they require.

If an Ansible role requires certain variables to be set, it’s best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the role.

Example:

# tasks/main.yml
- fail: msg="This role requires arl_environment to be set and non empty"
  when: arl_environment is not defined or arl_environment == ''

Tasks

Ansible tasks SHOULD NOT be used in Ansible playbooks. Instead, use pre_tasks and post_tasks.

An Ansible play is defined as a Yaml dictionary. Because of that, Ansible doesn’t know if the play’s tasks list or roles list was specified first. Therefore Ansible always runs tasks after roles.

This can be quite confusing if the tasks list is defined in the playbook before the roles list because people assume in order execution in Ansible.

Therefore, we SHOULD use pre_tasks and post_tasks to make it more clear when the tasks will be run.

See Ansible documentation on pre_tasks and post_tasks

Bad:
# playbook.yml
- hosts: localhost
  gather_facts: no
  tasks:
  - name: This will execute AFTER the example_role, so it's confusing
    debug: msg="in tasks list"
  roles:
  - role: example_role

# roles/example_role/tasks/main.yml
- debug: msg="in example_role"

Good:
---
# playbook.yml
- hosts: localhost
  gather_facts: no
  pre_tasks:
  - name: This will execute BEFORE the example_role, so it makes sense
    debug: msg="in pre_tasks list"
  roles:
  - role: example_role

# roles/example_role/tasks/main.yml
- debug: msg="in example_role"

Roles

Role Variables

  • group_vars/all - Contains variable definitions that apply to all roles.
  • "common" role - Contains variables and tasks that apply to all roles.
  • Role variables - Variables specific to a role should be defined in the role's vars/main.yml. All variables should be prefixed with the role name (see the sketch after this list).
  • Variables that are environment specific and that need to be overridden should be in all caps.
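
A minimal sketch of these conventions, assuming a hypothetical role named example_role (variable names and values are made up):

# roles/example_role/vars/main.yml — internal, role-prefixed, lowercase
example_role_port: 8080
example_role_log_dir: /var/log/example_role

# group_vars/all — environment specific, meant to be overridden, ALL CAPS
EXAMPLE_ROLE_ENVIRONMENT: stage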

Role naming conventions

  • Role names - Terse, lowercase, one word if possible (see the sketch after this list)
  • Role task names - Terse, descriptive, spaces are OK
  • Role handlers - Terse, descriptive, spaces are OK
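
Applying these conventions to a hypothetical role named nginx:

# roles/nginx/tasks/main.yml — the role name is terse, lowercase, one word;
# the task name is terse and descriptive, with spaces
- name: Install the nginx package
  package:
    name: nginx
    state: present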

All tasks in a role SHOULD be tagged with the role name.

See Ansible doc explaining tags

Ansible tasks can be tagged, and then these tags can be used to either run or skip the tagged tasks using the --tags and --skip-tags ansible-playbook options respectively.

This is very useful when developing and debugging new tasks. It can also significantly speed up playbook runs if the user specifies only the roles that changed.

Example:

# roles/example_role/tasks/main.yml
- debug: msg="in example_role"
  tags:
  - example_role

The Ansible roles directory MUST maintain a flat structure.

See Ansible Suggested Directory Layout

The purpose of this rule is to:

  • Comply with the upstream best practices
  • Make it familiar for new contributors
  • Make it compatible with Ansible Galaxy
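
A flat roles directory, with hypothetical role names, looks like the following; nested layouts such as roles/web/nginx/ are avoided:

roles/
├── common/
├── nginx/
└── openshift_master/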

Ansible Roles SHOULD be named like technology_component[_subcomponent].

For consistency, role names SHOULD follow the above naming pattern. It is important to note that this is a recommendation for role naming, and follows the pattern used by upstream.

Many times the technology portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used.

Examples: The role to configure a master is called openshift_master

The role to configure OpenShift specific yum repositories is called openshift_repos

Filters

See Ansible Playbook Filters See Jinja2 Builtin Filters

The default filter SHOULD replace empty strings, lists, etc.

When using the jinja2 default filter, unless the variable is a boolean, specify true as the second parameter. This will cause the default filter to replace empty strings, lists, etc with the provided default.

This is because it is preferable to either have a sane default set than to have an empty string, list, etc. For example, it is preferable to have a config value set to a sane default than to have it simply set as an empty string.

From the Jinja2 Docs:

If you want to use default with variables that evaluate to false you have to set the second parameter to true

Example:
---
- hosts: localhost
  gather_facts: no
  vars:
    somevar: ''
  tasks:
  - debug: var=somevar

  - name: "Will output 'somevar: []'"
    debug: "msg='somevar: [{{ somevar | default('the string was empty') }}]'"

  - name: "Will output 'somevar: [the string was empty]'"
    debug: "msg='somevar: [{{ somevar | default('the string was empty', true) }}]'"

In other words, normally the default filter will only replace the value if it’s undefined. By setting the second parameter to true, it will also replace the value if it defaults to a false value in Python, so None, empty list, empty string, etc.

This is almost always more desirable than an empty list, string, etc.

Package Management

Package installation MUST use the Ansible package module to abstract away apt-get/yum, instead of using the os_family or ansible_pkg_mgr fact to determine which package manager module to call.

The Ansible package module calls the associated package manager for the underlying OS. This allows for the support of multiple OS(s) within a single role.

See Ansible package module

Bad:
---
# tasks.yml
- name: Install etcd (for etcdctl)
  yum: name=etcd state=latest
  when: ansible_pkg_mgr == "yum"
  register: install_result

- name: Install etcd (for etcdctl)
  dnf: name=etcd state=latest
  when: ansible_pkg_mgr == "dnf"
  register: install_result
  
Good:
---
# tasks.yml
- name: Install etcd (for etcdctl)
  package: name=etcd state=latest
  register: install_result

Ansible Coding Conventions

General

  • YAML files - All yaml files should use 2 space indents and end with .yml
  • Variables - Use jinja variable syntax over deprecated variable syntax. {{ var }} not $var
  • Use spaces around jinja variable names. {{ var }} not {{var}}
  • Variables that are environment specific and that need to be overridden should be in ALL CAPS.
  • Variables that are internal to the role should be lowercase.
  • Prefix all variables defined in a role with the name of the role. Example: EDXAPP_FOO
  • Keep roles self contained - Roles should avoid including tasks from other roles when possible
  • Plays should do nothing more than include a list of roles except where pre_tasks and post_tasks are required (to manage a load balancer for example)
  • Tests - The .gitignore should ignore *test* to allow for local testing without polluting the repository
  • Handlers - Any service that needs to be restarted should be restarted via a handler. In other words, no hardcoded restarts of any service, including reboots of the host itself (see the sketch after this list).
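
A minimal sketch of the handler convention, assuming a hypothetical example_app service (file paths and names are made up):

# roles/example_app/tasks/main.yml
- name: Deploy example_app configuration
  template:
    src: example_app.conf.j2
    dest: /etc/example_app/example_app.conf
  notify: restart example_app

# roles/example_app/handlers/main.yml
- name: restart example_app
  service:
    name: example_app
    state: restarted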

Conditionals and return status

  • Always use when: for conditionals. To check if a variable is defined, use when: my_var is defined or when: my_var is not defined (see the sketch after this list).
  • To learn more, see the Ansible documentation on conditional execution.
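
A sketch of both patterns, a when: guard on a defined variable and a branch on a registered return status; the variable, file path, and task names are hypothetical:

- name: Show example_var only when it is defined
  debug:
    msg: "example_var is {{ example_var }}"
  when: example_var is defined

- name: Check whether the example config already exists
  command: test -f /etc/example.conf
  register: example_conf_check
  failed_when: false
  changed_when: false

- name: Create the example config only if it was missing
  copy:
    content: "managed by ansible\n"
    dest: /etc/example.conf
  when: example_conf_check.rc != 0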

Replacing text

  • When using replace, ensure the proper regex is applied.
  • When using lineinfile, always copy the entirety of the line (see the sketch after this list).
  • When using blockinfile, use a Jinja2 template.
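
For instance, a lineinfile task that replaces the entire line and a blockinfile task driven by a Jinja2 template; the file paths, setting, and template name are hypothetical:

- name: Disable SSH password authentication
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication '
    line: 'PasswordAuthentication no'

- name: Manage the example block from a Jinja2 template
  blockinfile:
    path: /etc/example.conf
    block: "{{ lookup('template', 'example_block.j2') }}"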

Formatting

Break long lines using yaml line continuation

  - debug: >
      msg={{ test }}
  - debug:
      msg: "{{ test }}"

Secure vs. Insecure data

As a general policy, we want to protect the following data:

  • Usernames (should be avoided in public repos)
  • Public keys (keys are OK to be public)
  • Hostnames (should never be hardcoded in play)
  • Passwords, API keys (should never be hardcoded in a play; see the sketch after this list)
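
A sketch of keeping such data out of plays, assuming hypothetical variable names that are supplied from vaulted group_vars rather than hardcoded:

# group_vars/all/vault.yml is encrypted with ansible-vault and defines
# EXAMPLE_DB_USER and EXAMPLE_DB_PASSWORD; the play only references them.
- name: Create the application database user
  mysql_user:
    name: "{{ EXAMPLE_DB_USER }}"
    password: "{{ EXAMPLE_DB_PASSWORD }}"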

Application development

Use symlinks to link configuration files

When creating configuration files outside of the application install directory, use symlinks that point to the configuration files within the application directory. That way, when the symlink is updated to a new or old version of the application, the configuration files are updated along with it.
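
A sketch of the pattern with the file module, using hypothetical paths; the symlink lives outside the install directory and points at the config shipped inside it:

- name: Link /etc/example_app.conf to the config in the current release
  file:
    src: /opt/example_app/current/conf/example_app.conf
    dest: /etc/example_app.conf
    state: link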
