switch markdownlint container to markdownlint-cli2 #743

Merged 1 commit on Jan 24, 2024
9 changes: 9 additions & 0 deletions .markdownlint-cli2.yaml
@@ -0,0 +1,9 @@
# Reference: https://github.com/DavidAnson/markdownlint-cli2#markdownlint-cli2yaml

config:
  ul-indent:
    # Kramdown wanted us to have 3 earlier, though this CLI recommends 2 or 4
    indent: 3

# Don't autofix anything, we're linting here
fix: false
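
With this file in place, markdownlint-cli2 picks up `.markdownlint-cli2.yaml` from the working directory automatically, so the same check can also be run straight from the repository root, for example:

```sh
# lint all markdown files, skipping .github (same glob as hack/markdownlint.sh)
markdownlint-cli2 "**/*.md" "#.github"
```
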
14 changes: 8 additions & 6 deletions README.md
@@ -29,21 +29,23 @@ Then run
make run-dev-env
```

If the IP address of the newly created virtual machine is not shown, then run the following
If the IP address of the newly created virtual machine is not shown,
then run the following

```
```sh
virsh net-dhcp-leases default
```

ssh into the machine with the metal3ci user

```
```sh
ssh metal3ci@VM_IP
```

When running ```make``` as described below, if you hit an issue about the default network, saying that it is already in use
by ens2, you need to modify the file ```/etc/libvirt/qemu/networks/default.xml```
to change the CIDR to not use the same CIDR as ens2 or any other interface.
When running ```make``` as described below, if you hit an error saying
that the default network is already in use by ens2, modify the file
```/etc/libvirt/qemu/networks/default.xml``` so that its CIDR does not
overlap with ens2 or any other interface.
Then run

```sh
57 changes: 37 additions & 20 deletions ci/images/README.md
@@ -45,8 +45,9 @@ A container image is available and contains all the tools to build the images
```

```bash
docker run --rm -it -v "<path to metal3-dev-tool repo>:/data"
-v "<path to ci keys folder>:/data/keys" registry.nordix.org/metal3/image-builder /bin/bash
docker run --rm -it -v "<path to metal3-dev-tool repo>:/data" \
-v "<path to ci keys folder>:/data/keys" \
registry.nordix.org/metal3/image-builder /bin/bash
```

### Calling the scripts
@@ -86,18 +87,24 @@ The centos building scripts take three arguments :

### Building and testing locally

The scripts mentioned above (gen_*_image.sh) are made for use in our CI pipelines.
As such, they make certain assumptions that may not be ideal when developing and testing things locally (e.g. keypair name and ssh-key path).
The scripts mentioned above (gen_*_image.sh) are made for use in our CI
pipelines. As such, they make certain assumptions that may not be ideal
when developing and testing things locally (e.g. keypair name and
ssh-key path).

For these situations there is another script: `run_local.sh`.
It allows overriding many variables so it should be easy to customize to your needs.

This is how you use it:

0. Create an ssh key using `ssh-keygen` if you don't have one already and add it to openstack: `openstack keypair create --public-key /path/to/key <name>`.
1. Check the comments and variables at the top of `run_local.sh` and determine what you want/need to override.
0. Create an ssh key using `ssh-keygen` if you don't have one already
and add it to openstack: `openstack keypair create --public-key
/path/to/key <name>`.
1. Check the comments and variables at the top of `run_local.sh` and
determine what you want/need to override.
2. Create a file with your custom variables.
3. Get an openstack.rc file with credentials to the cloud you want to build in.
3. Get an openstack.rc file with credentials to the cloud you want to
build in.
4. Source your variables, source the openstack.rc.
5. Run the script: `./run_local.sh <provisioning-script>`
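
A sketch of the resulting local workflow (the key name, key path, and
variables file below are placeholders, and the provisioning script is just
one of the available options):

```sh
# 0. create an ssh key if you don't have one, and register it in openstack
ssh-keygen -t ed25519 -f ~/.ssh/image-builder
openstack keypair create --public-key ~/.ssh/image-builder.pub image-builder-key

# 2.-4. source your custom variables, then the openstack credentials
source my_overrides.env
source openstack.rc

# 5. run the build with the chosen provisioning script
./run_local.sh provision_node_image_ubuntu.sh
```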

@@ -122,22 +129,29 @@ export RT_USER=<your-username>
export RT_TOKEN=<your-token>
```

**NOTE:** The script uploads the node images (i.e. those produced by `provision_node_image_ubuntu.sh` and `provision_node_image_centos.sh`) to Artifactory.
Make sure you use an image name that does not conflict with existing images!
This is also a good idea for other images since they will end up in Openstack.
**NOTE:** The script uploads the node images (i.e. those produced by
`provision_node_image_ubuntu.sh` and `provision_node_image_centos.sh`)
to Artifactory. Make sure you use an image name that does not conflict
with existing images! This is also a good idea for other images since
they will end up in Openstack.

Additional configuration options:

```bash
export PACKER_DEBUG_ENABLED="true/false" # if true runs the packer build in interactive debug mode,
# packer stops between build stages
export IMAGE_CLEANUP="true/false" # if true deletes the image from openstack that has an equivalent
# name with the one specifed in the IMAGE_NAME variable, if false doesn't delete any image,
# instead appends a timestamp to the name of each newly built image to avoid collision
# if true, runs the packer build in interactive debug mode;
# packer stops between build stages
export PACKER_DEBUG_ENABLED="true/false"
# if true, deletes the openstack image whose name matches the one specified
# in the IMAGE_NAME variable; if false, doesn't delete any image and instead
# appends a timestamp to the name of each newly built image to avoid collisions
export IMAGE_CLEANUP="true/false"
```
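
For example, a local run that steps through the packer stages interactively
and keeps existing images untouched might use (values are illustrative):

```sh
export PACKER_DEBUG_ENABLED="true"
export IMAGE_CLEANUP="false"
```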

If the the `./run_local sandbox` command is issued the packer build and the artifactory upload process
won't be executed, rather the user will be presented with an interactive container environment where
the openstack cli can be used to check the openstack resources.
If the `./run_local.sh sandbox` command is issued, the packer build and
the artifactory upload process won't be executed; instead, the user will
be presented with an interactive container environment where the
openstack cli can be used to check the openstack resources.

And here is how to use it:

@@ -161,7 +175,8 @@ export RT_TOKEN=<your-token>
../scripts/image_scripts/upload_node_image_rt.sh "${IMAGE_NAME}"
```

The script will automatically download the image from Openstack and upload it to the `metal3/images/k8s_${KUBERNETES_VERSION}` folder.
The script will automatically download the image from Openstack and
upload it to the `metal3/images/k8s_${KUBERNETES_VERSION}` folder.

#### Clean up images

@@ -190,7 +205,9 @@ openstack image delete ${IMAGE_NAME}

## Packer image build flow

This describes the flow of configuration and invocation of scripts in order to build an image with `run_local.sh`. This flow matches `provision_metal3_image_ubuntu.sh` provisioning.
This describes the flow of configuration and invocation of scripts in
order to build an image with `run_local.sh`. This flow matches
`provision_metal3_image_ubuntu.sh` provisioning.

Requirements for the build are:

16 changes: 13 additions & 3 deletions ci/scripts/openstack/README.md
@@ -1,6 +1,7 @@
# Openstack Infrastructure

This folder contain scripts to create/delete/interact with openstack infrastructure for CI and DEV environments.
This folder contains scripts to create/delete/interact with openstack
infrastructure for CI and DEV environments.

## Prerequisites

@@ -10,7 +11,11 @@ This folder contain scripts to create/delete/interact with openstack infrastruct

## CI Infrastructure

CI Infrastructure contains Router, external network, SSH Keys, Bastion server, base images e.t.c. There are scripts to delete and create complete infra from scratch. Any changes to the bare minimal infrastructure like routers and networks would require deletion of infrastructure and creating again from scratch.
CI Infrastructure contains the router, external network, SSH keys,
Bastion server, base images, etc. There are scripts to delete and create
the complete infra from scratch. Any change to the bare minimal
infrastructure, such as routers and networks, requires deleting the
infrastructure and creating it again from scratch.

### Create Infrastructure

@@ -26,7 +31,12 @@ CI Infrastructure contains Router, external network, SSH Keys, Bastion server, b

### DEV Infrastructure

DEV Infrastructure like CI infra contains basic components like routers, external network and Bastion server e.t.c. Apart from bare minimal infra rest of the things are left to developers. There are scripts to delete and create complete infra from scratch. Any changes to the bare minimal infrastructure like routers and networks would require deletion of infrastructure and creating again from scratch.
DEV Infrastructure, like the CI infra, contains basic components such as
routers, an external network and a Bastion server. Apart from this bare
minimal infra, the rest is left to developers. There are scripts to
delete and create the complete infra from scratch. Any change to the bare
minimal infrastructure, such as routers and networks, requires deleting
the infrastructure and creating it again from scratch.

Resources which developers can use directly are

2 changes: 1 addition & 1 deletion getting-started.md
@@ -4,7 +4,7 @@

This walk-through assumes that you have an EST email address set up.
Any question or issue related to the Nordix setup should be addressed to
discuss@lists.nordix.org
[discuss@lists.nordix.org](mailto:discuss@lists.nordix.org)

There is a [Nordix getting started](https://wiki.nordix.org/display/DEV/Getting+Started)

20 changes: 20 additions & 0 deletions hack/markdownlint.sh
@@ -0,0 +1,20 @@
#!/bin/sh
# markdownlint-cli2 has config file(s) named .markdownlint-cli2.yaml in the repo

set -eux

IS_CONTAINER="${IS_CONTAINER:-false}"
CONTAINER_RUNTIME="${CONTAINER_RUNTIME:-podman}"

# all md files, but ignore .github
if [ "${IS_CONTAINER}" != "false" ]; then
markdownlint-cli2 "**/*.md" "#.github"
else
"${CONTAINER_RUNTIME}" run --rm \
--env IS_CONTAINER=TRUE \
--volume "${PWD}:/workdir:ro,z" \
--entrypoint sh \
--workdir /workdir \
docker.io/pipelinecomponents/markdownlint-cli2:0.9.0@sha256:71370df6c967bae548b0bfd0ae313ddf44bfad87da76f88180eff55c6264098c \
/workdir/hack/markdownlint.sh "$@"
fi
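
For local use, the wrapper can be invoked directly; podman is the default
container runtime and can be overridden, for example:

```sh
./hack/markdownlint.sh
# or, to use docker instead of podman:
CONTAINER_RUNTIME=docker ./hack/markdownlint.sh
```
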
23 changes: 12 additions & 11 deletions tools/scalability/README.md
@@ -2,20 +2,21 @@

The `setup_vms.sh` does the following:

* Reads the ips of the vms from the list
* clones `metal3-dev-env` in each vm,
* checks out to the scalability git branch
* exports the correct environment variables according to the vm role
* performs `make scalability`. This includes
   * typical `metal3-dev-env` `make` operation (excluding 04 script)
   * setting up the overlay networks
* waits until all the vms have the ready dev environment
* copies the bmh CRs from all the vms to the master vm
* applies the bmh CRs in master vm's Ephemeral cluster
* edits the applied BMH CRs in place to add image in spec which triggers
- Reads the ips of the vms from the list
- clones `metal3-dev-env` in each vm
- checks out the scalability git branch
- exports the correct environment variables according to the vm role
- performs `make scalability`. This includes
   - the typical `metal3-dev-env` `make` operation (excluding the 04 script)
   - setting up the overlay networks
- waits until all the vms have their dev environment ready
- copies the bmh CRs from all the vms to the master vm
- applies the bmh CRs in the master vm's Ephemeral cluster
- edits the applied BMH CRs in place to add an image in spec, which
   triggers provisioning.

## Important note

The `vm_ip_list.txt` contains the ips of the vms. Note that `setup_vms.sh`
assumes the first ip in this list is the ip of the master vm. The rest
should be worker vms and can be in any order.
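
An illustrative `vm_ip_list.txt` (the addresses are placeholders; the first
entry is treated as the master vm, the rest as workers):

```
192.168.122.10
192.168.122.11
192.168.122.12
```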