
Commit

Fix some typos and improve readability
gojeaqui authored and nocturnalastro committed Nov 8, 2023
1 parent 1eee192 commit 2898bc8
Showing 2 changed files with 28 additions and 28 deletions.
54 changes: 27 additions & 27 deletions docs/inventory.md
@@ -34,25 +34,25 @@ Single Node OpenShift requires the API and Ingress VIPs to be set to the IP addr

In addition to that, the following checks must be met for both HA and SNO deployments:

- Every node has required vars:
- `bmc_address`
- `bmc_password`
- `bmc_user`
- `vendor`
- `role`
- `mac`
- Required vars are correctly typed
- All values of `vendor` are supported
- All values of `role` are supported
- If any nodes are virtual (`vendor = KVM`), then a `vm_host` is defined

The three possible groups of nodes are `masters`, `workers`, and `day2_workers`.
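
As a rough sketch of a node that satisfies those checks, an entry under one of these groups could look like the following. Only the key names come from the list above; the group/host layout and every value are placeholder assumptions (the sample inventory is authoritative).

```yaml
# Illustrative only: group/host layout and all values are placeholders.
masters:
  hosts:
    super1:
      ansible_host: 10.60.0.101     # address used to reach the node
      bmc_address: 10.60.50.101     # required
      bmc_user: admin               # required
      bmc_password: "change-me"     # required
      vendor: KVM                   # required, must be a supported vendor
      role: master                  # required, must be a supported role
      mac: "DE:AD:BE:EF:C0:2C"      # required, used to identify the node
```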

#### Day 2 nodes

Day 2 nodes are added to an existing cluster. The reason the installation of day 2 nodes is built into the main path of our automation is that, with the assisted installer, day 2 nodes can be on a different L2 network, which the main flow does not allow.

Add a second ISO name parameter to the inventory to avoid conflicts with the original:

```yaml
# day2 workers require custom parameter
```
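
A hedged sketch of how such a parameter might sit on a day 2 worker. The variable name `day2_discovery_iso_name` below is an assumption for illustration; the point is simply that day 2 workers get a discovery ISO name of their own so it does not clash with the original cluster's discovery ISO.

```yaml
day2_workers:
  hosts:
    worker3:
      # ...the usual required vars (bmc_*, vendor, role, mac)...
      # The parameter name below is an assumption for illustration; check the
      # sample inventory for the exact variable.
      day2_discovery_iso_name: "discovery-day2.iso"
```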
@@ -137,11 +137,11 @@ See the sample inventory file (`inventory.yml.sample`) and the sample inventory
### Network configuration

The `network_config` entry on a node is a simplified version of the `nmstate` ([nmstate.io](http://nmstate.io/)) required by the [assisted installer API](https://github.com/openshift/assisted-service/blob/3bcaca8abef5173b0e2175b5d0b722e851e39cee/docs/user-guide/restful-api-guide.md).
If you wish to use your own template, you can set `network_config.template` to the path of your desired template (the default can be found [here](../roles/generate_discovery_iso/templates/nmstate.yml.j2)). If you wish to write the `nmstate` by hand, you can use `network_config.raw`.
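
A minimal sketch of the two options; the template path and the `raw` body below are illustrative assumptions, not taken from the repository.

```yaml
network_config:
  # Option 1: render your own Jinja2 template instead of the default
  # nmstate.yml.j2 (the path below is a placeholder).
  template: "templates/my_nmstate.yml.j2"
  # Option 2 (use instead of `template`): hand-write the nmstate document.
  # raw:
  #   interfaces:
  #     - name: eth0
  #       type: ethernet
  #       state: up
```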

#### Static IPs

To activate static IPs in the discovery ISO and the resulting cluster, some configuration is required in the inventory.

```yaml
network_config:
```

@@ -209,9 +209,9 @@ network_config:

#### IPv6

At the moment, crucible doesn't configure DHCP for IPv6 entries, so you will have to roll your own or use static IPs (see the above section on network configuration).

Note: Crucible doesn't require the BMCs to be on the same network as long as they are routable from the bastion. So you could have (as per the example) the BMC addresses as IPv4 even if the cluster is IPv6. However, it should be noted that the HTTP Store has to be routable from the BMC network.

To setup an IPv6 single stack cluster you need to change the following variables:
```yaml
```

@@ -389,9 +389,9 @@ By default an SSH key will be generated by the `deploy_cluster.yml` playbook. Th

### DNS and DHCP

If you only point to the DNS configured by crucible, you should also provide an `upstream_dns` pointing to another DNS server that can provide records for non-crucible queries.

You can control the IP addresses on which dnsmasq listens using `listen_addresses`. By default this will include both `127.0.0.1` and ansible's default IPv4 address for the host (`ansible_default_ipv4.address`). You may also configure the interfaces on which dnsmasq responds by defining `listening_interfaces` as a list of the interfaces to listen on.

```yaml
dns_host:
@@ -409,7 +409,7 @@ dns_host:
```
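
As a hedged sketch of those two settings (addresses and interface names are placeholders, and the nesting under `dns_host` is assumed):

```yaml
dns_host:
  # Addresses dnsmasq binds to; by default 127.0.0.1 plus the host's
  # ansible_default_ipv4.address (values below are placeholders).
  listen_addresses:
    - 127.0.0.1
    - 10.60.0.190
  # Interfaces dnsmasq answers on (names are placeholders).
  listening_interfaces:
    - eth0
```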

#### DHCP
If you wish to configure dnsmasq to act as a DHCP server, you need to configure the following values:

```yaml
dns_host:
@@ -422,7 +422,7 @@ dns_host:
...
```

In addition, if you do not want dnsmasq to reply to DHCP requests on certain interfaces, you can define the list `no_dhcp_interfaces` so that dnsmasq will ignore them. For instance, assuming you have three interfaces `eth1`, `eth2`, and `eth3`, and you only wish for DHCP to listen on `eth2`, you could add the following:

```yaml
dns_host:
```
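
A minimal sketch of that configuration (the nesting under `dns_host` is assumed):

```yaml
dns_host:
  # dnsmasq ignores DHCP requests on these interfaces, leaving only eth2
  # to serve DHCP.
  no_dhcp_interfaces:
    - eth1
    - eth3
```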
@@ -490,7 +490,7 @@ That diagram gives the following excerpt from the inventory for the `bastion` an

### VM Host in Detail

The virtual `master` nodes, in their simplest case, are defined in the inventory by the address on which they will be accessible and the MAC address that will be set when creating the VM and later used by Assisted Installer to identify the machines:

```yaml
masters:
@@ -510,7 +510,7 @@ The virtual `master` nodes in their simplest case are defined in the inventory a
mac: "DE:AD:BE:EF:C0:2E"
```

For the virtual bridge configuration, in this example the interface `eno1` is used for accessing the VM host, while `eno2` is assigned to the virtual bridge to allow the virtual `super` nodes to connect to the Management Network. Note that these two interfaces cannot be the same. On the virtual bridge, DNS is provided by the DNS `service` configured on the Bastion host.

The `vm_host` entry in the inventory becomes:

@@ -527,13 +527,13 @@
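
A heavily hedged sketch of what such an entry could look like; the key names (`vm_bridge_interface`, `vm_bridge_ip`, `dns`) and all values are assumptions for illustration, not confirmed by this document.

```yaml
vm_hosts:
  hosts:
    vm_host1:
      ansible_host: 10.60.0.190       # reached over eno1
      # Key names below are assumptions for illustration only.
      vm_bridge_interface: eno2       # interface attached to the virtual bridge
      vm_bridge_ip: 10.60.0.190       # address the bridge will carry
      dns: 10.60.0.190                # DNS service running on the bastion
```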

### Resulting Cluster

Combining those pieces, along with other configuration such as versions, certificates, and keys, will allow Crucible to deploy a cluster like this:

![](images/simple_kvm.png)

## Bare Metal Deployment

At the other extreme from the previous example, services and nodes can be spread across multiple different machines, and a cluster with worker nodes can be deployed:

![](images/many_machines.png)

@@ -597,7 +597,7 @@ The basic network configuration of the inventory for the fully bare metal deploy
## Additional Partition Deployment

For OCP 4.8+ deployments you can set partitions on the nodes if required. You do this by adding the snippet below to the node definition. Please ensure you provide the correct label and size (MiB) for the additional partitions you want to create. The device can either be the drive on which the RHCOS image is to be installed, or any additional drive on the node that requires partitioning. If the device is equal to the host's `installation_disk_path`, then a partition defined by `disks_rhcos_root` will be added. All additional partitions must be added under the `extra_partitions` key as per the example below.

```yaml
disks:
```
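
A hedged sketch of the partitioning snippet; the nesting and values are assumptions for illustration, while `disks_rhcos_root`, `extra_partitions`, the label, and the size in MiB come from the paragraph above.

```yaml
disks:
  # Nesting and values are assumptions for illustration.
  disks_rhcos_root:
    device: /dev/sda          # may equal the host's installation_disk_path
    extra_partitions:
      - label: data
        size: 102400          # MiB
```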
@@ -660,8 +660,8 @@ These two examples are not the only type of clusters that can be deployed using

# Mirroring operators and index for disconnected installations

By default we do not populate the disconnected registry with the operators used post-install.
This is because it takes a substantial amount of time and can be done post-install, or
even in parallel, by the user by running:

```bash
```

@@ -674,8 +674,8 @@ If you wish to populate the registry as part of deploying the prerequisites you

## DNS Entries for Bastion, Services and VM_Hosts

When using the crucible-provided DNS, the automation will create entries for the bastion, the service hosts, and the VM hosts.
The value of `ansible_fqdn` will be used, except where `registry_fqdn` is defined as part of `registry_host`, or where `sushy_fqdn` is defined as part of `vm_hosts`.

NOTE: The DNS entries will only be created if the `ansible_host` is an _IP address_; otherwise it will be skipped.
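
A hedged sketch of those overrides; the group/host layout, addresses, and FQDNs are placeholders.

```yaml
registry_host:
  hosts:
    registry:
      ansible_host: 10.60.0.200            # must be an IP address for a record to be created
      registry_fqdn: registry.example.com  # used instead of ansible_fqdn

vm_hosts:
  hosts:
    vm_host1:
      ansible_host: 10.60.0.190
      sushy_fqdn: sushy.example.com        # used instead of ansible_fqdn
```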

@@ -684,9 +684,9 @@ To force the automation to skip a host you can add `dns_skip_record: true` to th
## DNS Entries for BMCs

Automatic creation of DNS records for your BMC nodes requires `setup_dns_service: true`. Crucible will create DNS A and PTR records.
For this to occur, you are required to add `bmc_ip:` alongside `ansible_host` in your host definitions.
The addresses will be templated as `{{ inventory_hostname }}-bmc.infra.{{ base_dns_domain }}`.
If `setup_dns_service` is `false`, crucible will not create any DNS records.

For example: The BMC address for host `super1` will be `"super1-bmc.infra.example.com"`.
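
Putting that together, a minimal sketch for `super1` (the group layout and all addresses are placeholders):

```yaml
all:
  vars:
    setup_dns_service: true              # required for automatic BMC records
  children:
    masters:
      hosts:
        super1:
          ansible_host: 10.60.0.101      # must be an IP address
          bmc_ip: 10.60.50.101           # yields super1-bmc.infra.example.com A and PTR records
```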

@@ -729,10 +729,10 @@ all:
# Defining a password for the discovery ISO

If users wish to provide a password for the discovery ISO, they must define `hashed_discovery_password` in the `all` section of the inventory.
The value provided in `hashed_discovery_password` can be created by using `mkpasswd --method=SHA-512 MyAwesomePassword`.
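
A minimal sketch of where that variable sits (placement under `all.vars` is assumed, and the hash is a truncated placeholder):

```yaml
all:
  vars:
    # Generated with: mkpasswd --method=SHA-512 MyAwesomePassword
    hashed_discovery_password: "$6$examplesalt$exampleHashOutput..."
```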


# Operators

It is possible to install a few operators as part of the cluster installation. These operators are the Local Storage Operator (`install_lso: True`), Open Data Fabric (`install_odf: True`), and OpenShift Virtualization (`install_cnv: True`).
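
A minimal sketch of enabling all three (placement under `all.vars` is assumed):

```yaml
all:
  vars:
    install_lso: True    # Local Storage Operator
    install_odf: True    # Open Data Fabric
    install_cnv: True    # OpenShift Virtualization
```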
2 changes: 1 addition & 1 deletion playbooks/README.md
@@ -1,7 +1,7 @@
# Playbooks

Most playbooks here are called by the higher level playbooks (`deploy_cluster.yml`, `deploy_day2_workers.yml` and `deploy_prerequisites.yml`)
and are nominally put in the order of usage, with some exceptions. This may be useful for debugging or for stepping through the process; however, you should note where variables are overridden in the higher level playbooks to achieve the same results (e.g. when using `generate_discovery_iso.yml` for a day2 cluster).

| Playbook name | Description | Required arguments |
| -------------------------------------- | -------------------------------------------------------------------------------- | --------------------- |