diff --git a/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx b/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
index e103325f..df7d50ef 100644
--- a/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
+++ b/public/omni/infrastructure-and-extensions/install-airgapped-omni.mdx
@@ -2,422 +2,566 @@
title: Installing Airgapped Omni
---

-### Prerequisites
+import { omni_release, version, release } from '/snippets/custom-variables.mdx';

-DNS server NTP server TLS certificates Installed on machine running Omni
+This document walks through running the "Sidero stack" in an offline environment, which includes the following components.

-* genuuid
-  * Used to generate a unique account ID for Omni.
-* Docker
-  * Used for running the suite of applications
-* Wireguard
-  * Used by Siderolink
+* Omni
+* [Image Factory](./self-hosted/deploy-image-factory-on-prem)
+* Container registry
+* Authentication service

-### Gathering Dependencies
+>When running Talos with Omni you do not need to run additional services such as the [Discovery Service](../../talos/v1.12/configure-your-talos-cluster/system-configuration/discovery) because discovery functionality is built in to Omni.

-In this package, we will be installing:
+If you already have services such as a container registry, an authentication service (SAML or OIDC), or a trusted certificate authority, you can skip those sections of the guide. This guide sets up a proof-of-concept deployment to get you started. We recommend [talking with the Sidero team](https://www.siderolabs.com/contact/) for production deployments.

-* Gitea
-* Keycloak
-* Omni
+Omni is licensed under the [Business Source License](https://github.com/siderolabs/omni/blob/main/LICENSE) and requires a support contract for production use.

-To keep everything organized, I am using the following directory structure to store all the dependencies and I will move them to the airgapped network all at once.
+## Prerequisites

-> **NOTE:** The empty directories will be used for the persistent data volumes when we deploy these apps in Docker.
+This guide expects your environment to provide the following supporting services:

-```bash
-airgap
-├── certs
-├── gitea
-├── keycloak
-├── omni
-└── registry
-```
+* Networks that can route Talos nodes to the Omni service endpoints
+* Basic networking services such as DNS, DHCP, and NTP
+* An admin system that can connect to the internet to download assets
+* A Linux server (e.g. RHEL, Ubuntu) to run the Sidero stack
+
+>The Sidero stack can be run on a single server or multiple servers, but we don't recommend running Omni inside Kubernetes for air gapped environments.

-#### Generate Certificates
+In addition to these services you'll need the following tools installed on the administrator machine and server.

-**TLS Certificates**
+* [`talosctl`](../../talos/v1.12/getting-started/talosctl)
+* `docker` or `podman`
+* [`cfssl`](https://github.com/cloudflare/cfssl) and `cfssljson`
+* [`yq`](https://github.com/mikefarah/yq)
+* [`crane`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md)
+* `htpasswd`

-This tutorial will involve configuring all of the applications to be accessed via https with signed `.pem` certificates generated with [certbot](https://certbot.eff.org/). There are many methods of configuring TLS certificates and this guide will not cover how to generate your own TLS certificates, but there are many resources available online to help with this subject if you do not have certificates already.
+>Podman is known to work, but some of its flags differ from Docker's and you may have to translate them for your version of Podman.

-**Omni Certificate**
+

-Omni uses etcd to store the data for our installation and we need to give it a private key to use for encryption of the etcd database.
+Download static binaries of the required tools. `docker` and `htpasswd` should be provided by your distribution's package manager.
-Omni uses etcd to store the data for our installation and we need to give it a private key to use for encryption of the etcd database. +`talosctl` -1. First, Generate a GPG key. + +{`curl -L -o talosctl https://github.com/siderolabs/talos/releases/download/${release}/talosctl-linux-amd64\nchmod +x talosctl`} + + +`cfssl` ```bash -gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never +CFSSL_VERSION=$(curl -sI https://github.com/cloudflare/cfssl/releases/latest | grep -i location | awk -F '/' '{print $NF}' | tr -d '\r') +curl -L -o cfssl https://github.com/cloudflare/cfssl/releases/download/${CFSSL_VERSION}/cfssl_${CFSSL_VERSION#v}_linux_amd64 +curl -L -o cfssljson https://github.com/cloudflare/cfssl/releases/download/${CFSSL_VERSION}/cfssljson_${CFSSL_VERSION#v}_linux_amd64 +chmod +x cfssl cfssljson ``` -This will generate a new GPG key pair with the specified properties. - -What's going on here? +`yq` -* `quick-generate-key` allows us to quickly generate a new GPG key pair. -`"Omni (Used for etcd data encryption) how-to-guide@siderolabs.com"` is the user ID associated with the key which generally consists of the real name, a comment, and an email address for the user. -* `rsa4096` specifies the algorithm type and key size. -* `cert` means this key can be used to certify other keys. -* `never` specifies that this key will never expire. +```bash +YQ_VERSION=$(curl -sI https://github.com/mikefarah/yq/releases/latest | grep -i location | awk -F '/' '{print $NF}' | tr -d '\r') +curl -L -o yq "https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64" +chmod +x yq +``` -2. 
Add an encryption subkey +`crane` +```bash +CRANE_VERSION=$(curl -sI https://github.com/google/go-containerregistry/releases/latest | grep -i location | cut -d/ -f8 | tr -d '\r') +curl -sL "https://github.com/google/go-containerregistry/releases/download/${CRANE_VERSION}/go-containerregistry_Linux_x86_64.tar.gz" | tar -xz crane +chmod +x crane +``` + -We will use the fingerprint of this key to create an encryption subkey. +### Export endpoints -To find the fingerprint of the key we just created, run: +To make this guide easier to follow we will set global variables for each of the endpoints and ports we will use. Update the hostnames and ports if you change any of them from the defaults. ```bash -gpg --list-secret-keys +REGISTRY_ENDPOINT=registry.internal:5000 +FACTORY_ENDPOINT=factory.internal:8080 +AUTH_ENDPOINT=auth.internal:5556 +OMNI_ENDPOINT=omni.internal ``` -Next, run the following command to create the encryption subkey, replacing `$FPR` with your own keys fingerprint. +## 1. Generate Certificates -```bash -gpg --quick-add-key $FPR rsa4096 encr never -``` +In order to run services securely, even in an air gapped environment, you should run with encrypted data in transit and at rest. There are multiple certificates and keys needed to secure your infrastructure. + +* CA certificate (root of trust) +* Domain certificates for the following endpoints + * Omni + * Authentication + * Image factory + * Container registry +* Container signing certificate +* Omni database encryption key -In this command: -* `$FPR` is the fingerprint of the key we are adding the subkey to. -* `rsa4096` and `encr` specify that the new subkey will be an RSA encryption key with a size of 4096 bits. -* `never` means this subkey will never expire. +### Create Root CA certificate (optional) -3. Export the secret key +If you already have a trusted, internal root CA you can skip generating the CA (root of trust). 
You will need to use your existing CA to create certificates for the services in this guide. Skip to [Generate endpoint certificates](#generate-endpoint-certificates).

-Lastly we'll export this key into an ASCII formatted file so Omni can use it.
+We will use the `cfssl` command to make the CA and certificate signing easier, but `openssl` can be used if you have existing CA infrastructure.

```bash
-gpg --export-secret-key --armor how-to-guide@siderolabs.com > certs/omni.asc
+cat <<EOF > ca-csr.json
+{
+  "CN": "Internal Root CA",
+  "key": {
+    "algo": "rsa",
+    "size": 4096
+  },
+  "names": [
+    {
+      "C": "US",
+      "O": "Internal Infrastructure",
+      "OU": "Security"
+    }
+  ]
+}
+EOF
```
+With the configuration create a CA certificate.
+
+```bash
+cfssl gencert -initca ca-csr.json | cfssljson -bare ca
+```
+
+This will give you a private key, `ca-key.pem`, the CA certificate `ca.pem`, and a certificate signing request `ca.csr`.

-* `--armor` is an option which creates the output in ASCII format. Without it, the output would be binary.
+Any client that will call the internal services needs the `ca.pem` file installed in its trusted store.

-Save this file to the certs directory in our package.
+
+
+    On Red Hat and Fedora based distros you can copy the `ca.pem` file into the `/etc/pki/ca-trust/source/anchors/` folder and then run the following command to generate the trusted root store:

-#### Create the app.ini File
+    ```bash
+    sudo cp ca.pem /etc/pki/ca-trust/source/anchors/
+    sudo update-ca-trust
+    ```
+  
+  
+    On Ubuntu and Debian based Linux distros you can copy the `ca.pem` file into the `/usr/local/share/ca-certificates/` directory and rename it to `ca.crt`.

-Gitea uses a configuration file named **app.ini** which we can use to pre-configure with the necessary information to run Gitea and bypass the initial startup page. When we start the container, we will mount this file as a volume using Docker.
+ ```bash + sudo cp ca.pem /usr/local/share/ca-certificates/ca.crt + sudo update-ca-certificates + ``` + + + For macOS you should open the *Keychain Access* application and drag the `ca.pem` file into the window to install it. + + + On Windows you should rename the certificate extension from `.pem` to `.crt`. You can then double click on the file and select *Install Certificate* -> *Local Machine*. You then need to select "Place all certificates in the following store" and select the *Trusted Root Certification Authorities*. -Create the **app.ini** file + If you're using Windows Subsystem for Linux (WSL) you should follow the Linux guide for installing the certificate. + + + +### Generate endpoint certificates + +Generate a single certificate that all services can use based on the CA we just created. For a production deployment you should generate individual certificates for each service. + +Create a signing configuration to let `cfssl` know we want a web server certificate that should expire in 1 year. 
```bash
cat <<EOF > ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "web-server": {
        "usages": ["signing", "key encipherment", "server auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
```

-Replace the `DOMAIN`, `SSH_DOMAIN`, and `ROOT_URL` values with your own hostname:

-```ini
-APP_NAME=Gitea: Git with a cup of tea
-RUN_MODE=prod
-RUN_USER=git
-I_AM_BEING_UNSAFE_RUNNING_AS_ROOT=false
-
-[server]
-CERT_FILE=cert.pem
-KEY_FILE=key.pem
-APP_DATA_PATH=/data/gitea
-DOMAIN=${GITEA_HOSTNAME}
-SSH_DOMAIN=${GITEA_HOSTNAME}
-HTTP_PORT=3000
-ROOT_URL=https://${GITEA_HOSTNAME}:3000/
-HTTP_ADDR=0.0.0.0
-PROTOCOL=https
-LOCAL_ROOT_URL=https://localhost:3000/
-
-[database]
-PATH=/data/gitea/gitea.db
-DB_TYPE=sqlite3
-
-[security]
-INSTALL_LOCK=true # This is the value which tells Gitea not to run the initial configuration wizard on start up
-```

>When using `.internal` domains you will need to update your DNS server or `/etc/hosts` file to make sure the endpoints resolve properly. The file should look something like this.

```text
# Example config in /etc/hosts
127.0.0.1 localhost registry.internal factory.internal auth.internal omni.internal
```

-> **NOTE:** If running this in a production environment, you will also want to configure the database settings for a production database. This configuration will use an internal sqlite database in the container.

Now create a wildcard certificate for your services. We'll be using the ICANN reserved `.internal` TLD for service domains, but you can use any domain if you have it registered.
-#### Gathering Images +```bash +cat < wildcard-csr.json +{ + "CN": "Internal Wildcard", + "hosts": [ + "internal", + "${AUTH_ENDPOINT%:*}", + "${REGISTRY_ENDPOINT%:*}", + "${FACTORY_ENDPOINT%:*}", + "${OMNI_ENDPOINT%:*}", + "127.0.0.1", + "$(hostname -I | awk '{print $1}')" + ], + "key": { + "algo": "rsa", + "size": 4096 + } +} +EOF +``` -Next we will gather all the images needed installing Gitea, Keycloak, Omni, **and** the images Omni will need for creating and installing Talos. +Generate the wildcard certificates for services. -I'll be using the following images for the tutorial: +```bash +cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + -config=ca-config.json \ + -profile=web-server wildcard-csr.json \ + | cfssljson -bare server +``` -* Gitea - * `docker.io/gitea/gitea:1.19.3` -* Keycloak - * `quay.io/keycloak/keycloak:21.1.1` -* Omni - * `ghcr.io/siderolabs/omni:v0.31.0` - * `ghcr.io/siderolabs/imager:v1.4.5` - * pull this image to match the version of Talos you would like to use. -* Talos - * `ghcr.io/siderolabs/flannel:v0.21.4` - * `ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505` - * `docker.io/coredns/coredns:1.10.1` - * `gcr.io/etcd-development/etcd:v3.5.9` - * `registry.k8s.io/kube-apiserver:v1.27.2` - * `registry.k8s.io/kube-controller-manager:v1.27.2` - * `registry.k8s.io/kube-scheduler:v1.27.2` - * `registry.k8s.io/kube-proxy:v1.27.2` - * `ghcr.io/siderolabs/kubelet:v1.27.2` - * `ghcr.io/siderolabs/installer:v1.4.5` - * `registry.k8s.io/pause:3.6` - -> **NOTE**: The Talos images needed may be found using the command `talosctl image default`. If you do not have `talosctl` installed, you may find the instructions on how to install it [here](https://omni.siderolabs.com/docs/how-to-guides/how-to-install-talosctl/). - -**Package the images** - -1. Pull the images to load them locally into Docker. - -* Run the following command for each of the images listed above **except** for the Omni image which will be provided to you as an archive file already. 
+Create a certificate chain with server and CA. ```bash -sudo docker pull registry/repository/image-name:tag +cat server.pem ca.pem > server-chain.pem ``` -2. Verify all of the images have been downloaded +This will create a private server key, `server-key.pem`, a public server key, `server.pem`, a server signing request, `server.csr`, and a `server-chain.pem`, a server certificate with CA. + +Services in this guide run with different user IDs so we will update the certificates to allow all users to read them. This is not suitable or secure for a production environment. ```bash -sudo docker image ls +chmod 644 server*.pem ``` -3. Save all of the images into an archive file. +## 2. Image factory and container registry -* All of the images can be saved as a single archive file which can be used to load all at once on our airgapped machine with the following command. +Using the certificates we just created, follow the guide [Deploy Image Factory On-prem](./self-hosted/deploy-image-factory-on-prem). This will +create a container registry and host the Image Factory in your environment. It will also sign container images with an offline key for verification. + +If you do not have a working Image Factory with Talos images and extensions seeded do not continue with the guide. That is a pre-requisite for running Omni in an air gapped environment. + +## 3. Authentication + +If you have existing SAML or OIDC authentication available you can use that with Omni. Please see the configuration guides in the [Authentication and Authorization](../security-and-authentication/authentication-and-authorization) section. + +For a PoC environment we will run [dex](https://dexidp.io/docs/) with static users configured. Dex can be used for static configuration or to communicate with upstream providers. For this guide we will configure static users. 
+ +### Deploy Dex (optional) + +Because Omni does not have any user authentication Dex will be configured so we can log in to Omni with a static user. You will need to download the dex container from a machine that has internet access and push it to your internal registry. + +#### Download dex + +Download the dex container image. ```bash -docker save -o image-tarfile.tar \ - list \ - of \ - images +docker pull ghcr.io/dexidp/dex:v2.41.1 ``` -Here is an example of the command used for the images in this tutorial: +If your machine has access to the internal registry you can push the image directly. ```bash -docker save -o registry/all_images.tar \ - docker.io/gitea/gitea:1.19.3 \ - quay.io/keycloak/keycloak:21.1.1 \ - ghcr.io/siderolabs/imager:v1.4.5 \ - ghcr.io/siderolabs/flannel:v0.21.4 \ - ghcr.io/siderolabs/install-cni:v1.4.0-1-g9b07505 \ - docker.io/coredns/coredns:1.10.1 \ - gcr.io/etcd-development/etcd:v3.5.9 \ - registry.k8s.io/kube-apiserver:v1.27.2 \ - registry.k8s.io/kube-controller-manager:v1.27.2 \ - registry.k8s.io/kube-scheduler:v1.27.2 \ - registry.k8s.io/kube-proxy:v1.27.2 \ - ghcr.io/siderolabs/kubelet:v1.27.2 \ - ghcr.io/siderolabs/installer:v1.4.5 \ - registry.k8s.io/pause:3.6 +docker tag ghcr.io/dexidp/dex:v2.41.1 ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 +docker push ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 ``` -#### Move Dependencies +If you need to send the image to a remote machine or transfer it via an offline method you can export the container image. -Now that we have all the packages necessary for the airgapped deployment of Omni, we'll create a compressed archive file and move it to our airgapped network. +```bash +docker save -o dex.tar ghcr.io/dexidp/dex:v2.41.1 +``` -The directory structure should look like this now: +On the remote machine load the archive into the local image storage and then push it to the registry. 
```bash
docker load -i dex.tar
docker push ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1
```

#### Configure dex

We will create an example dex configuration to use with Omni.

Create a password hash. `htpasswd` will prompt you for a password; remember it for logging in later.

```bash
export OMNI_USER_PASSWORD=$(htpasswd -BnC 15 user42 | cut --delimiter=: --fields=1 --complement)
```

Create a dex configuration file.

```bash
cat <<EOF > dex.yaml
issuer: https://${AUTH_ENDPOINT}
storage:
  type: memory

web:
  https: 0.0.0.0:5556
  tlsCert: /etc/dex/tls/server-chain.pem
  tlsKey: /etc/dex/tls/server-key.pem

enablePasswordDB: true

staticClients:
  - name: Omni
    id: omni
    secret: aW50ZXJuYWwtc2lkZXJvLXN0YWNrCg== #internal-sidero-stack
    redirectURIs: [https://omni.internal/oidc/consume]

staticPasswords:
  - email: "admin@omni.internal"
    username: "admin"
    preferredUsername: "admin"
    hash: "$OMNI_USER_PASSWORD"
EOF
```

#### Run dex

Run dex with the provided configuration and certificate.
```bash -cd ~/ -tar xzvf omni-airgap.tar.gz +docker run -d \ + --name dex \ + -p 5556:5556 \ + -v $(pwd)/dex.yaml:/etc/dex/dex.yaml \ + -v $(pwd)/server-key.pem:/etc/dex/tls/server-key.pem \ + -v $(pwd)/server-chain.pem:/etc/dex/tls/server-chain.pem \ + ${REGISTRY_ENDPOINT}/dexidp/dex:v2.41.1 \ + dex serve /etc/dex/dex.yaml ``` -### Log in Airgapped Machine +>If your machine has SELinux in enforcing mode you may need to add `:Z` to the volume mounts in the docker command. + +## 4. Run Omni -From here on out, the rest of the tutorial will take place from the airgapped machine we will be installing Omni, Keycloak, and Gitea on. +Omni will depend on the following URLs. If these services are not running or the hostnames do not resolve you may need to add them to your /etc/hosts file. -### Gitea +* omni.internal +* factory.internal +* registry.internal +* auth.internal -Gitea will be used as a container registry for storing our images, but also many other functionalities including Git, Large File Storage, and the ability to store packages for many different package types. For more information on what you can use Gitea for, visit their [documentation](https://docs.gitea.com/). +### Create etcd encryption key -#### Install Gitea +Data in Omni's database is encrypted and we need an encryption key to provide to Omni. -Load the images we moved over. This will load all the images into Docker on the airgapped machine. +Generate a GPG key: ```bash -docker load -i registry/omni-image.tar -docker load -i registry/all_images.tar +gpg --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never +FINGERPRINT=$(gpg --with-colons --list-keys "how-to-guide@siderolabs.com" | awk -F: '$1 == "fpr" {print $10; exit}') +gpg --quick-add-key ${FINGERPRINT} rsa4096 encr never +gpg --export-secret-key --armor how-to-guide@siderolabs.com > omni.asc ``` -Run Gitea using Docker: +>Note: Do not add passphrases to keys during creation. 
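If you prefer to script this step, the same key material can be produced non-interactively. A sketch using a throwaway keyring so it does not touch your personal `~/.gnupg` (the identity string is the example one used above):

```shell
# Non-interactive variant of the key generation above (a sketch).
# A throwaway GNUPGHOME keeps this out of any existing keyring.
export GNUPGHOME=$(mktemp -d)

# Create an unprotected certification key, per the note about passphrases.
gpg --batch --pinentry-mode loopback --passphrase '' \
  --quick-generate-key "Omni (Used for etcd data encryption) how-to-guide@siderolabs.com" rsa4096 cert never

# Look up the fingerprint and attach an encryption subkey.
FINGERPRINT=$(gpg --with-colons --list-keys "how-to-guide@siderolabs.com" \
  | awk -F: '$1 == "fpr" {print $10; exit}')
gpg --batch --pinentry-mode loopback --passphrase '' \
  --quick-add-key "${FINGERPRINT}" rsa4096 encr never

# Export the ASCII-armored secret key for Omni to consume.
gpg --export-secret-key --armor how-to-guide@siderolabs.com > omni.asc
head -1 omni.asc   # -----BEGIN PGP PRIVATE KEY BLOCK-----
```

Keep the resulting `omni.asc` with the certificates so it can be mounted into the Omni container later.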
-* The **app.ini** file is already configured and mounted below with the `- v` argument. +### Download Omni -```bash -sudo docker run -it \ - -v $PWD/certs/privkey.pem:/data/gitea/key.pem \ - -v $PWD/certs/fullchain.pem:/data/gitea/cert.pem \ - -v $PWD/gitea/app.ini:/data/gitea/conf/app.ini \ - -v $PWD/gitea/data/:/data/gitea/ \ - -p 3000:3000 \ - gitea/gitea:1.19.3 -``` +Download the Omni container image. + + +{`docker pull ghcr.io/siderolabs/omni:${omni_release}\ndocker tag ghcr.io/siderolabs/omni:${omni_release} \\\n \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + + +If your machine has access to the internal registry you can push the image directly. -You may now log in at the `https://${GITEA_HOSTNAME}:3000` to begin configuring Gitea to store all the images needed for Omni and Talos. + +{`docker push \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -#### Gitea setup +If you need to send the image to a remote machine or transfer it via an offline method you can export the container image. -This is just the bare minimum setup to run Omni. Gitea has many additional configuration options and security measures to use in accordance with your industry's security standards. More information on the configuration of Gitea can be found [here](https://docs.gitea.com/). + +{`docker save -o omni.tar \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -**Create a user** +On the remote machine load the archive into the local image storage and then push it to the registry. -Click the **Register** button at the **top right** corner. The first user created will be created as an administrator - permissions can be adjusted afterwards if you like. + +{`docker load -i omni.tar\ndocker push \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release}`} + -**Create organizations** +### Start Omni container -After registering an admin user, the organizations can be created which will act as the package repositories for storing images. 
Create the following organizations: +This will run Omni with an embedded etcd database mounted to the host. It is not recommended for production use cases. The command assumes the certificates generated earlier are available in the local directory where you run this command. -* `siderolabs` -* `keycloak` -* `coredns` -* `etcd-development` -* `registry-k8s-io-proxy` + +{`docker run \\\n --name omni \\\n -d --net=host \\\n --cap-add=NET_ADMIN \\\n --device /dev/net/tun:/dev/net/tun \\\n -v "\${PWD}/ca.pem:/etc/ssl/certs/ca-certificates.crt:ro" \\\n -v "\${PWD}/etcd:/_out/etcd" \\\n -v "\${PWD}/sqlite:/_out/sqlite:rw" \\\n -v "\${PWD}/server-key.pem:/server-key.pem:ro" \\\n -v "\${PWD}/server-chain.pem:/server-chain.pem:ro" \\\n -v "\${PWD}/omni.asc:/omni.asc:ro" \\\n \${REGISTRY_ENDPOINT}/siderolabs/omni:${omni_release} \\\n --name=air-gap-omni \\\n --cert=/server-chain.pem \\\n --key=/server-key.pem \\\n --siderolink-api-cert=/server-chain.pem \\\n --siderolink-api-key=/server-key.pem \\\n --private-key-source=file:///omni.asc \\\n --event-sink-port=8091 \\\n --bind-addr=0.0.0.0:443 \\\n --siderolink-api-bind-addr=0.0.0.0:8090 \\\n --k8s-proxy-bind-addr=0.0.0.0:8100 \\\n --advertised-api-url=https://omni.internal \\\n --siderolink-api-advertised-url=https://omni.internal:8090 \\\n --siderolink-wireguard-advertised-addr=\$(hostname -I | awk '{print \$1}'):50180 \\\n --advertised-kubernetes-proxy-url=https://omni.internal:8100 \\\n --auth-auth0-enabled=false \\\n --auth-oidc-enabled=true \\\n --auth-oidc-client-secret=aW50ZXJuYWwtc2lkZXJvLXN0YWNrCg== \\\n --auth-oidc-provider-url=https://\${AUTH_ENDPOINT} \\\n --auth-oidc-client-id=omni \\\n --auth-oidc-scopes=openid,profile,email \\\n --image-factory-address=https://\${FACTORY_ENDPOINT} \\\n --initial-users=admin@omni.internal \\\n --kubernetes-registry=\${REGISTRY_ENDPOINT}/siderolabs/kubelet \\\n --sqlite-storage-path=/_out/sqlite/omni.db \\\n --talos-installer-registry=\${REGISTRY_ENDPOINT}/siderolabs/installer 
\\\n --workload-proxying-enabled=false \\\n --metrics-bind-addr=0.0.0.0:2123`}
 

>We changed the `--metrics-bind-addr` to use port `:2123` to avoid port conflicts with Image Factory (if it's running on the same host).

-> **NOTE:** If you are using self-signed certs and would like to push images to your local Gitea using Docker, you will also need to configure your certs.d directory as described [here](https://docs.docker.com/engine/security/certificates/).

-#### Push Images to Gitea
+These flags mount the files and directories needed by Omni (e.g. certificates, etcd storage) and set flags to connect Omni to the upstream services (e.g. factory, authentication).

-Now that all of our organizations have been created, we can push the images we loaded into our Gitea for deploying Keycloak, Omni, and storing images used by Talos.
+## 5. Create a cluster

-For all of the images loaded, we first need to tag them for our Gitea.
+Before you create a cluster you will need to generate installation media. When creating a cluster, make sure you patch the machine config to redirect container registries to your internal registry.

+### Download Kubernetes containers

+Seed the internal registry with Kubernetes container images.

```bash
-sudo docker tag original-image:tag gitea:3000/new-image:tag
+talosctl images k8s-bundle > images.txt
```

-For example, if I am tagging the kube-proxy image it will look like this:
+Download the images and push them into the internal registry.

+
+

-> **NOTE:** Don't forget to tag all of the images from **registry.k8s.io** to go to the **registry-k8s-io-proxy** organization created in Gitea.
+If your machine can reach the public internet and the internal registry at the same time you can copy the images internally with this command.
```bash -docker tag registry.k8s.io/kube-proxy:v1.27.2 ${GITEA_HOSTNAME}:3000/registry-k8s-io-proxy/kube-proxy:v1.27.2 +for SOURCE_IMAGE in $(cat images.txt) + do + IMAGE_WITHOUT_DIGEST=${SOURCE_IMAGE%%@*} + IMAGE_WITH_NEW_REG="${REGISTRY_ENDPOINT}/${IMAGE_WITHOUT_DIGEST#*/}" + crane copy \ + $SOURCE_IMAGE \ + $IMAGE_WITH_NEW_REG +done ``` + + -Finally, push all the images into Gitea. +If you don't have direct access to an internal container registry (e.g. air gapped environment) you need to download the container images while connected to the internet with this command: ```bash -docker push ${GITEA_HOSTNAME}:3000/registry-k8s-io-proxy/kube-proxy:v1.27.2 +cat images.txt \ + | talosctl images cache-create \ + --layout flat \ + --image-cache-path ./image-cache \ + --images=- ``` -### Keycloak +Move the `image-cache` folder to an air gapped machine and serve the images on a read only, temporary container registry with: -#### Install Keycloak +```bash +IP=$(hostname -I | awk '{print $1}') -The image used for keycloak is already loaded into Gitea and there are no files to stage before starting it so I'll run the following command to start it. **Replace KEYCLOAK\_HOSTNAME and GITEA\_HOSTNAME with your own hostnames**. 
+talosctl image cache-cert-gen \ + --advertised-address $IP -```bash -sudo docker run -it \ - -p 8080:8080 \ - -p 8443:8443 \ - -v $PWD/certs/fullchain.pem:/etc/x509/https/tls.crt \ - -v $PWD/certs/privkey.pem:/etc/x509/https/tls.key \ - -v $PWD/keycloak/data:/opt/keycloak/data \ - -e KEYCLOAK_ADMIN=admin \ - -e KEYCLOAK_ADMIN_PASSWORD=admin \ - -e KC_HOSTNAME=${KEYCLOAK_HOSTNAME} \ - -e KC_HTTPS_CERTIFICATE_FILE=/etc/x509/https/tls.crt \ - -e KC_HTTPS_CERTIFICATE_KEY_FILE=/etc/x509/https/tls.key \ - ${GITEA_HOSTNAME}:3000/keycloak/keycloak:21.1.1 \ - start +talosctl image cache-serve \ + --address $IP:5000 \ + --image-cache-path ./image-cache \ + --tls-cert-file tls.crt \ + --tls-key-file tls.key ``` -Once Keycloak is installed, you can reach it in your browser at `https://${KEYCLOAK\_HOSTNAME}:3000` +A temporary image registry will run on your local machine IP address port 5000 with self-signed certificates. Copy the images to an internal, permanent container registry. -#### Configuring Keycloak - -For details on configuring Keycloak as a SAML Identity Provider to be used with Omni, follow this guide: [Configuring Keycloak SAML](../infrastructure-and-extensions/self-hosted/configure-keycloak-for-omni) +```bash +for SOURCE_IMAGE in $(cat images.txt) + do + IMAGE_WITHOUT_DIGEST=${SOURCE_IMAGE%%@*} + IMAGE_WITH_NEW_REG="${REGISTRY_ENDPOINT}/${IMAGE_WITHOUT_DIGEST#*/}" + LOCALHOST_IMAGE="localhost:5000/${IMAGE_WITHOUT_DIGEST#*/}" + crane copy \ + $LOCALHOST_IMAGE \ + $IMAGE_WITH_NEW_REG +done +``` + + -### Omni -With Keycloak and Gitea installed and configured, we're ready to start up Omni and start creating and managing clusters. +### Create installation media -#### Install Omni +The first step to create a cluster is to boot machines and connect them to Omni. In order to do that we will need to embed the self-signed CA certificate into Talos. -To install Omni, first generate a UUID to pass to Omni when we start it. 
+Create a configuration for a TrustedRootsConfig: ```bash -export OMNI_ACCOUNT_UUID=$(uuidgen) +yq eval --null-input ' +.apiVersion = "v1alpha1" | +.kind = "TrustedRootsConfig" | +.name = "internal-ca" | +.certificates = load_str("ca.pem") +' > trustedrootsconfig.yaml ``` -Next run the following command, replacing hostnames for Omni, Gitea, or Keycloak with your own. +You can use this config two different ways. Use kernel arguments for Talos 1.11 and older and use embedded config for Talos 1.12. -```bash -sudo docker run \ - --net=host \ - --cap-add=NET_ADMIN \ - -v $PWD/etcd:/_out/etcd \ - -v $PWD/certs/fullchain.pem:/fullchain.pem \ - -v $PWD/certs/privkey.pem:/privkey.pem \ - -v $PWD/certs/omni.asc:/omni.asc \ - ${GITEA_HOSTNAME}:3000/siderolabs/omni:v0.12.0 \ - --account-id=${OMNI_ACCOUNT_UUID} \ - --name=omni \ - --cert=/fullchain.pem \ - --key=/privkey.pem \ - --siderolink-api-cert=/fullchain.pem \ - --siderolink-api-key=/privkey.pem \ - --private-key-source=file:///omni.asc \ - --event-sink-port=8091 \ - --bind-addr=0.0.0.0:443 \ - --siderolink-api-bind-addr=0.0.0.0:8090 \ - --k8s-proxy-bind-addr=0.0.0.0:8100 \ - --advertised-api-url=https://${OMNI_HOSTNAME}:443/ \ - --siderolink-api-advertised-url=https://${OMNI_HOSTNAME}:8090/ \ - --siderolink-wireguard-advertised-addr=${OMNI_HOSTNAME}:50180 \ - --advertised-kubernetes-proxy-url=https://${OMNI_HOSTNAME}:8100/ \ - --auth-auth0-enabled=false \ - --auth-saml-enabled \ - --talos-installer-registry=${GITEA_HOSTNAME}:3000/siderolabs/installer \ - --talos-imager-image=${GITEA_HOSTNAME}:3000/siderolabs/imager:v1.4.5 \ - --kubernetes-registry=${GITEA_HOSTNAME}:3000/siderolabs/kubelet \ - --auth-saml-url "https://${KEYCLOAK_HOSTNAME}:8443/realms/omni/protocol/saml/descriptor" -``` + + + Create an output directory for installation media and config. 
+ + ```bash + mkdir _out + mv trustedrootsconfig.yaml _out/machine-config.yaml + echo "---" >> _out/machine-config.yaml + ``` + + Download machine join configuration from Omni. You can do this from the Omni web interface home page by clicking on the **Download Machine Join Config** button or if you have `omnictl` installed you can download it with + + ```bash + omnictl jointoken machine-config >> _out/machine-config.yaml + ``` + + Create a static hosts configuration for Omni and the registry. + + ```bash + yq --null-input ' + .apiVersion = "v1alpha1" | + .kind = "StaticHostConfig" | + .name = "'$(hostname -I | awk '{print $1}')'" | + .hostnames = ["'${OMNI_ENDPOINT%:*}'", "'${REGISTRY_ENDPOINT%:*}'"] +' > hosts-config.yaml + ``` + Append this configuration to the others. + + ```bash + echo "---" >> _out/machine-config.yaml + cat hosts-config.yaml >> _out/machine-config.yaml + ``` + + Embed both configurations and create an installation ISO with `imager`. + + + {`docker run --rm -t \\\n -v "\${PWD}/_out:/out" \\\n --privileged \\\n \${REGISTRY_ENDPOINT}/siderolabs/imager:${release} \\\n iso \\\n --embedded-config-path=/out/machine-config.yaml`} + -What's going on here: + + -* `--auth-auth0-enabled=false` tells Omni not to use Auth0. -* `--auth-saml-enabled` enables SAML authentication. -* `--talos-installer-registry`, `--talos-imager-image` and `--kubernetes-registry` allow you to set the default images used by Omni to point to your local repository. -* `--auth-saml-url` is the URL we saved earlier in the configuration of Keycloak. - * `--auth-saml-metadata` may also be used if you would like to pass it as a file instead of a URL and can be used if using self-signed certificates for Keycloak. + Get the kernel arguments from Omni with `omnictl`. You can also copy them from the Omni web interface with the **Copy Kernel Parameters** button on the home page. 
-#### Creating a cluster + ```bash + OMNI_KERNEL_ARGS=$(omnictl jointoken kernel-args) + ``` -Guides on creating a cluster on Omni can be found here: + Compress and base64 encode the configuration for a kernel argument. -* [Creating an Omni cluster](../getting-started/create-a-cluster) + ```bash + TRUSTED_ROOT_CONFIG=$(cat trustedrootsconfig.yaml | zstd --compress --ultra -21 | base64 -w 0) + ``` -Because we're working in an airgapped environment we will need the following values added to our cluster configs so they know where to pull images from. More information on the Talos MachineConfig.registries can be found [here](https://www.talos.dev/latest/talos-guides/discovery/). + Create an ISO with `imager` with both of the kernel arguments. -> **NOTE:** In this example, cluster discovery is also disabled. You may also configure cluster discovery on your network. More information on the Discovery Service can be found [here](https://www.talos.dev/latest/talos-guides/discovery/) + + {`docker run --rm -t \\\n -v "\${PWD}/_out:/out" \\\n --privileged \\\n \${REGISTRY_ENDPOINT}/siderolabs/imager:${release} \\\n iso \\\n --extra-kernel-arg "talos.config.early=\$TRUSTED_ROOT_CONFIG $OMNI_KERNEL_ARGS"`} + + + + + +No matter which version of Talos you use you should have a Talos iso file in the `_out` directory. You can use this to boot a machine and it will connect to Omni and trust the self-signed CA certificate. + +### Create cluster in Omni + +Guides on creating a cluster on Omni can be found at [creating an Omni cluster](../getting-started/create-a-cluster). + +Because we're working in an airgapped environment we will need the following values added to our cluster configs so they know where to pull images from. We also need to provide the CA certificate to the node so it will trust the certificate that signed `omni.internal` endpoint. + +> **NOTE:** In this example, cluster discovery is also disabled. You may also configure cluster discovery via Omni. 
More information on the Discovery Service can be found here. ```yaml machine: @@ -425,26 +569,28 @@ machine: mirrors: docker.io: endpoints: - - https://${GITEA_HOSTNAME}:3000 + - https://${REGISTRY_ENDPOINT} gcr.io: endpoints: - - https://${GITEA_HOSTNAME}:3000 + - https://${REGISTRY_ENDPOINT} ghcr.io: endpoints: - - https://${GITEA_HOSTNAME}:3000 + - https://${REGISTRY_ENDPOINT} registry.k8s.io: endpoints: - - https://${GITEA_HOSTNAME}:3000/v2/registry-k8s-io-proxy - overridePath: true + - https://${REGISTRY_ENDPOINT} cluster: discovery: enabled: false +--- +apiVersion: v1alpha1 +kind: RegistryTLSConfig +name: ${REGISTRY_ENDPOINT} +ca: |- + -----BEGIN CERTIFICATE----- + MIID...IDAQAB + -----END CERTIFICATE----- ``` -Specifics on patching machines can be found here: - -* [Create a Patch for Cluster Machines](../omni-cluster-setup/create-a-patch-for-cluster-machines) - -### Closure +The machine patch should be added to the cluster during cluster creation and should be applied cluster wide. -With Omni, Gitea, and Keycloak set up, you are ready to start managing and installing Talos clusters on your network! The suite of applications installed in this tutorial is an example of how an airgapped environment can be set up to make the most out of the Kubernetes clusters on your network. Other container registries or authentication providers may also be used with a similar setup, but this suite was chosen to give you a starting point and an example of what your environment could look like. 
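If you manage clusters with `omnictl` cluster templates, a cluster-wide patch can be referenced from the template file. A minimal sketch (the cluster name, versions, and patch file path are illustrative):

```yaml
kind: Cluster
name: airgapped-cluster
kubernetes:
  version: v1.31.1
talos:
  version: v1.9.0
patches:
  - file: patches/airgap-registries.yaml
```

Syncing the template (for example with `omnictl cluster template sync --file cluster-template.yaml`) applies the patch to every machine in the cluster.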
diff --git a/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx b/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx index bff26430..465a0a62 100644 --- a/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx +++ b/public/omni/infrastructure-and-extensions/self-hosted/deploy-image-factory-on-prem.mdx @@ -2,37 +2,87 @@ title: Deploy Image Factory On-prem --- +import { release } from '/snippets/custom-variables.mdx'; + The [Image Factory](https://github.com/siderolabs/image-factory) is a way for you to dynamically create Talos Linux images. There is a public, hosted version of the Image Factory at [factory.talos.dev](https://factory.talos.dev) and it can also be run in your environment. -The Image Factory is a critical component of [Omni](../../overview/what-is-omni) to generate installation media and update Talos nodes, but it is not required to use Omni to use the Image Factory. It is a web interface and API for the `imager` command which is used to customize Talos from the command line. +The Image Factory is a critical component of [Omni](../../overview/what-is-omni) to generate installation media and update Talos nodes, but it is not required to use Omni to use the Image Factory. It is a web interface and API for the `imager` command which is used to customize Talos from the command line. ## Prerequisites * Machine to run Image Factory -* Container registry (with Talos images) -* Image cache signing key -* Image cache storage (optional) +* [`crane`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) +* `docker` or `podman` + +>Podman is known to work but has some flags that are different than docker and you may have to translate them for your version of podman. -### Container Registry +### Container registry -If you already have a container registry available you can export your registry to an environment variable. 
+If you already have a container registry available you can export your registry to an environment variable and skip ahead to creating an [image cache signing key](#image-cache-signing-key).
 
 ```shell
-INTERNAL_REG=
+REGISTRY_ENDPOINT=registry.internal:5000
 ```
 
-If you don't have a container registry available to push images to you can temporarily run one with the `zot` container.
+If you don't have a container registry available to push images to you can temporarily run one with the `registry` container. We recommend using the official `registry:2` image from Docker, as some registries do not support all OCI images. This example doesn't have persistent storage.
+
+
+  We recommend using certificates for your temporary registry. You will need to provide your own certificates and mount them into the container at run time. If you do not have certificates, follow the steps in the [Omni air-gapped documentation](../install-airgapped-omni#1-generate-certificates).
+
+  ```bash
+  docker run -d \
+    --name registry \
+    -p 5000:5000 \
+    -v ${PWD}/server-key.pem:/certs/server-key.pem:ro \
+    -v ${PWD}/server-chain.pem:/certs/server-chain.pem:ro \
+    -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
+    -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem \
+    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-chain.pem \
+    registry:2
+  ```
+  Make sure the CA certificate is in your system PKI path and `docker` has been restarted so it trusts the certificate.
+
+
+  A registry can be run without certificates or encrypted communication. Running this way will require you to add a flag to `crane` and `docker` to allow insecure communication.
+```bash
+docker run -d -p 5000:5000 --name registry registry:2
+```
+
+
+
+Without internet access, you will need to download the container image, transfer it internally, and load it on the target machine.
+
+```bash
-docker run -d -p 5000:5000 --name zot \
-    ghcr.io/project-zot/zot:latest
+docker save -o registry.tar registry:2
+```
+Transfer the `registry.tar` file to an internal system.
+```bash
+docker load -i registry.tar
+```
+Run the registry with certificates.
+
+If SELinux is enabled, replace `:ro` with `:Z`.
 
-INTERNAL_REG=127.0.0.1:5000
+```bash
+docker run -d \
+  --name registry \
+  -p 5000:5000 \
+  -v ${PWD}/server-key.pem:/certs/server-key.pem:ro \
+  -v ${PWD}/server-chain.pem:/certs/server-chain.pem:ro \
+  -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
+  -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem \
+  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-chain.pem \
+  registry:2
 ```
 
-### Image Cache Signing Key
+
+
+### Image cache signing key
 
 You need to create a Cache Signing Key to sign cached Talos image artifacts, ensuring they haven’t been tampered with before being served.
 
@@ -41,89 +91,104 @@ You need to create a Cache Signing Key to sign cached Talos image artifacts, ens
 ```shell
 openssl ecparam -name prime256v1 -genkey -noout -out signing-key.key
 ```
 
-### Image Cache Storage
+### Image cache storage (optional)
 
-There are a variety of image cache locations to store built images. Without an image cache each asset will be built when requested which can consume a high amount of CPU on the image factory machine.
+There are a variety of image cache locations to store built images. Without an image cache, each asset will be built on demand, which can consume a high amount of CPU on the image factory machine.
 
 Some supported cache storage options include:
 
 * CDN
-* s3 bucket (or compatable API)
+* S3 bucket (or compatible API)
 
 Please view the `--help` output for cache options.
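To confirm the signing key generated above is usable, you can inspect it with standard `openssl` tooling (a quick sketch; the exact text output varies slightly between OpenSSL versions):

```shell
# The cache signing key should be an EC key on the prime256v1 (P-256) curve;
# print the key details and confirm the curve name appears.
openssl ec -in signing-key.key -noout -text | grep -E "prime256v1|P-256"
```

If the command prints nothing, regenerate the key with the `openssl ecparam` command above.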
-## Image Factory with the upstream container registry +## Run Image Factory + +There are two supported methods to run the Image Factory: + +* Connected to the upstream Sidero container registry +* Using a custom container registry + +A custom container registry is required for air-gapped environments or custom Talos builds. + + + + +Run with the official, upstream container registry if your machine is connected to the internet and you don't need custom Talos images. The official Sidero Labs registry has all of the required Talos installation containers, extensions, and tools. If you want to run image factory connected to the upstream container registry you can do it with: -### Run Image Factory - ```shell docker run -p 8080:8080 -d \ --name image-factory \ -v $PWD/signing-key.key:/signing-key.key:ro \ ghcr.io/siderolabs/image-factory:v0.9.0 \ -cache-signing-key-path /signing-key.key \ - -schematic-service-repository $INTERNAL_REG/siderolabs/image-factory/schematic + -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic ``` If your system has SELinux enabled you will need to mount the signing key with the :Z option so the image factory has access to the file. -This will run the image factory on your machine on port 8080 and pull container images from Sidero Labs ghcr.io registry. It will also validate image signatures using [`cosign`](https://edu.chainguard.dev/open-source/sigstore/cosign/an-introduction-to-cosign/) to validate pulled images. +This will run the image factory on your machine on port 8080 and automatically pull container images from Sidero's registry. It will also validate image signatures using [`cosign`](https://edu.chainguard.dev/open-source/sigstore/cosign/an-introduction-to-cosign/) to validate pulled images. This will not allow you to create or publish custom system extensions. -To do that you will need to run your own container registry with the necessary images. 
- -## Image Factory with Custom Container Registry +To do that you will need to run your own container registry with the necessary images. See the **disconnected** instructions for Image Factory. + + -Running the image factory in an air airgapped environment has more requirements than running in a connected mode. In addition to the requirements above you will also need. +Run with an internal container registry if your machine is not connected to the internet, or you need custom Talos images and extensions. -* Certificates for web frontend and registry -* Cosign image signing key +Running the image factory in an air-gapped environment has more requirements than running in a connected mode. Make sure you have a registry running from the [Internal container registry](#container-registry) section. You will need to download container images and seed them into the internal registry and sign the container images. -You will need to run a container registry in your environment. Any OCI compatable registry should work. For example purposes we will run a pull only image registry built into `talosctl` -This is just an example and should not be used in a production environment. If you want to test locally on your mahcine you can also see the [developer documentation](https://github.com/siderolabs/image-factory#air-gapped-mode) in the repository. +>This is just an example and should not be used in a production environment. If you want to test locally on your machine you can also see the [developer documentation](https://github.com/siderolabs/image-factory#air-gapped-mode) in the repository. -### Download Required Container Images +### Download container images Starting with Talos 1.12 you can get a list of images needed to seed the image factory directly from `talosctl`. 
Get a list of Talos base images needed for the image factory with: -```shell -talosctl image talos-bundle v1.11.5 > images.txt -``` + +{`talosctl image talos-bundle ${release} > images.txt`} + -This will give you a list of all images and extensions needed to build Talos 1.11.5. +This will give you a list of all images and extensions for Talos {release}. You will need to repeat this command for each version of Talos you want to download images for. -If you don't need specific extensions you can delete them from the images.txt file. +If you don't need specific extensions you can delete them from the `images.txt` file. + +Push the images to your `$REGISTRY_ENDPOINT` + + + -Replace the registry in each image with your registry or use `localhost:5000` if running a registry locally, and push the images. +If your machine can reach the public internet and the internal registry at the same time you can copy the images internally with this command. ```bash for SOURCE_IMAGE in $(cat images.txt) do IMAGE_WITHOUT_DIGEST=${SOURCE_IMAGE%%@*} - IMAGE_WITH_NEW_REG="${INTERNAL_REG}/${IMAGE_WITHOUT_DIGEST#*/}" + IMAGE_WITH_NEW_REG="${REGISTRY_ENDPOINT}/${IMAGE_WITHOUT_DIGEST#*/}" crane copy \ $SOURCE_IMAGE \ $IMAGE_WITH_NEW_REG done ``` + + -If you don't have an existing container registry or you _only_ want to download the container images you can download them to a folder with the command: +If you don't have direct access to an internal container registry (e.g. 
air gapped environment) you need to download the container images while connected to the internet with this command: ```bash cat images.txt \ - talosctl images cache-create \ - --layout flat \ - --image-cache-path ./image-cache \ - --images=- + | talosctl images cache-create \ + --layout flat \ + --image-cache-path ./image-cache \ + --images=- ``` -You will then be able to move the `image-cache` folder to an air gapped machine and serve the images on a read only, temporary container registry with: +Move the `image-cache` folder to an air gapped machine and serve the images on a read only, temporary container registry with: ```bash IP=$(hostname -I | awk '{print $1}') @@ -138,11 +203,26 @@ talosctl image cache-serve \ --tls-key-file tls.key ``` -The image registry should be on your local machine IP address port 5000 with self-signed certificates. You can use this to copy the images to an internal, permanent container registry. +A temporary image registry will run on your local machine IP address port 5000 with self-signed certificates. Copy the images to an internal, permanent container registry. + +```bash +for SOURCE_IMAGE in $(cat images.txt) + do + IMAGE_WITHOUT_DIGEST=${SOURCE_IMAGE%%@*} + IMAGE_WITH_NEW_REG="${REGISTRY_ENDPOINT}/${IMAGE_WITHOUT_DIGEST#*/}" + LOCALHOST_IMAGE="localhost:5000/${IMAGE_WITHOUT_DIGEST#*/}" + crane copy \ + $LOCALHOST_IMAGE \ + $IMAGE_WITH_NEW_REG +done +``` + + -### Sign Container Images -The Image Factory verifies container image signatures when being used. You will need to generate a cosign singing key and sign each container pushed to the registry. +### Sign container images + +The Image Factory verifies container image signatures when being used. Generate a cosign singing key and sign each container pushed to the registry. Image factory currently only supports `cosign` v2 signatures. @@ -157,18 +237,24 @@ docker run --rm -it \ generate-key-pair ``` -Now sign each image and tag in your internal registry. 
This will allow the registry to validate images without reaching out to any external services for key validation. +Sign each image and tag in your internal registry. This will allow the registry to validate images without reaching out to any external services for key validation. ```bash KEY_FILE="cosign.key" export COSIGN_PASSWORD="" # Leave empty for empty password ``` -Sign all of the images using the images.txt file as a list. +Sign all of the images using the `images.txt` file as a list. + +{/* Sign container images */} + + + +If your registry is running with a globally trusted certificate (e.g. signed by lets encrypt) you can sign the images with the following command: ```bash for IMAGE in $(cat images.txt) do - NEW_IMAGE="${INTERNAL_REG}/${IMAGE#*/}" + NEW_IMAGE="${REGISTRY_ENDPOINT}/${IMAGE#*/}" if [[ "$NEW_IMAGE" != *"@sha256:"* ]]; then NEW_IMAGE="${NEW_IMAGE}@$(crane digest $NEW_IMAGE)" fi @@ -179,52 +265,170 @@ for IMAGE in $(cat images.txt) --user $(id -u):$(id -g) \ ghcr.io/sigstore/cosign/cosign:v2.6.1 \ sign --key /keys/$KEY_FILE \ + --tlog-upload=false \ $NEW_IMAGE done ``` + + -### Run Image Factory +If your registry is running with a self-signed CA certificate (i.e. from the [Installing Airgapped Omni](../install-airgapped-omni) guide) you need to mount the CA certificate into the cosign container for it to be trusted. 
+
+```bash
+for IMAGE in $(cat images.txt)
+  do
+    NEW_IMAGE="${REGISTRY_ENDPOINT}/${IMAGE#*/}"
+    if [[ "$NEW_IMAGE" != *"@sha256:"* ]]; then
+      NEW_IMAGE="${NEW_IMAGE}@$(crane digest $NEW_IMAGE)"
+    fi
+    docker run --rm -it --net=host \
+    -v $PWD:/keys -w /keys \
+    -v "$PWD/ca.pem:/etc/ssl/certs/ca-certificates.crt:ro" \
+    -e COSIGN_PASSWORD="" \
+    -e COSIGN_YES=true \
+    --user $(id -u):$(id -g) \
+    ghcr.io/sigstore/cosign/cosign:v2.6.1 \
+    sign --key /keys/$KEY_FILE \
+    --tlog-upload=false \
+    $NEW_IMAGE
+done
+```
+
+
+{/* Sign container images */}
 
 With a populated container registry and signed images you are ready to run the Image Factory.
 
-If the container registry and image factory are run on the same machine in containers `localhost` won't be reachable unless you run each container with `--net=host` which is not recommended. An alternative approach would be to use [private Docker networking](https://docs.docker.com/engine/network/) to bridge the containers.
+Set an internal factory endpoint:
+```bash
+FACTORY_URL=https://factory.internal:8080
+```
+
+This guide assumes the container registry and image factory are running on the same machine. Because of this we will run the Image Factory with `--net=host`, which is not recommended for a production, multi-host deployment.
+
+
+To run the image factory with a trusted certificate, you can use the following command:
 ```bash
 docker run -p 8080:8080 -d \
   --name image-factory \
+  --net=host \
   -v $PWD/signing-key.key:/signing-key.key:ro \
   -v $PWD/cosign.pub:/cosign.pub:ro \
   ghcr.io/siderolabs/image-factory:v0.9.0 \
-  -image-registry $INTERNAL_REG \
-  -installer-internal-repository $INTERNAL_REG/siderolabs \
-  -installer-internal-repository $INTERNAL_REG/siderolabs \
-  -schematic-service-repository $INTERNAL_REG/siderolabs/image-factory/schematic \
-  -cache-repository $INTERNAL_REG/siderolabs/cache \
+  -external-url $FACTORY_URL \
+  -image-registry $REGISTRY_ENDPOINT \
+  -installer-external-repository $REGISTRY_ENDPOINT/siderolabs \
+  -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \
+  -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \
+  -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \
   -cache-signing-key-path /signing-key.key \
   -container-signature-pubkey /cosign.pub \
   -cache-cdn-enabled=false \
   -cache-s3-enabled=false
 ```
 
-## Insecure
-If your image factory does not have certificates or does not have certificates trusted by the image factory you should add the following flags.
-```bash
-  -insecure-image-registry \
-  -insecure-installer-internal-repository \
-  -insecure-schematic-service-repository
-```
+
+
+To run the image factory with a self-signed CA certificate you need to mount the CA certificate into the container at run time.
 
-If you are running on a server with SELinux enabled and enforcing then volumes mounted into the container will not be available unless you append `:Z` to the volume mounts.
+
+
-If you have an internal, private certificate authority you will need to mount that into the Image Factory image so it can trust the registry certificate. Mount it into the container by adding `-v /etc/pki/ca-trust/source/anchors:/etc/ssl/certs:ro` to the Image Factory commands.
+If you are running on a server with SELinux enabled and enforcing, then volumes mounted into the container will not be available unless you append `:Z` to the volume mounts.
+```bash
+docker run -p 8080:8080 -d \
+  --name image-factory \
+  --net=host \
+  -v /dev:/dev \
+  --privileged \
+  -v $PWD/signing-key.key:/signing-key.key:ro \
+  -v $PWD/cosign.pub:/cosign.pub:ro \
+  -v $PWD/server-chain.pem:/certs/server-chain.pem:ro \
+  -v $PWD/server-key.pem:/certs/server-key.pem:ro \
+  -v /etc/pki/ca-trust/source/anchors/:/etc/ssl/certs:ro \
+  ghcr.io/siderolabs/image-factory:v0.9.0 \
+  -external-url $FACTORY_URL \
+  -image-registry $REGISTRY_ENDPOINT \
+  -installer-external-repository $REGISTRY_ENDPOINT/siderolabs \
+  -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \
+  -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \
+  -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \
+  -cache-signing-key-path /signing-key.key \
+  -container-signature-pubkey /cosign.pub \
+  -cache-cdn-enabled=false \
+  -cache-s3-enabled=false \
+  -http-key-file=/certs/server-key.pem \
+  -http-cert-file=/certs/server-chain.pem
+```
+
+
+```bash
+docker run -p 8080:8080 -d \
+  --name image-factory \
+  --net=host \
+  -v /dev:/dev \
+  --privileged \
+  -v $PWD/signing-key.key:/signing-key.key:ro \
+  -v $PWD/cosign.pub:/cosign.pub:ro \
+  -v $PWD/server-chain.pem:/certs/server-chain.pem:ro \
+  -v $PWD/server-key.pem:/certs/server-key.pem:ro \
+  -v /usr/local/share/ca-certificates/:/etc/ssl/certs:ro \
+  ghcr.io/siderolabs/image-factory:v0.9.0 \
+  -external-url $FACTORY_URL \
+  -image-registry $REGISTRY_ENDPOINT \
+  -installer-external-repository $REGISTRY_ENDPOINT/siderolabs \
+  -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \
+  -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \
+  -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \
+  -cache-signing-key-path /signing-key.key \
+  -container-signature-pubkey /cosign.pub \
+  -cache-cdn-enabled=false \
+  -cache-s3-enabled=false \
+  -http-key-file=/certs/server-key.pem \
+  -http-cert-file=/certs/server-chain.pem
+```
+
+
+
-## Generating Images
+
-To generate some image types (e.g. ISO) you will need to mount `/dev` into the image factory container and allow privileged operations.
-Add these flags to your docker command.
+If your image factory and container registry do not have certificates, run the following command:
 
 ```bash
-  -v /dev:/dev --privileged
+docker run -p 8080:8080 -d \
+  --name image-factory \
+  --net=host \
+  -v /dev:/dev \
+  --privileged \
+  -v $PWD/signing-key.key:/signing-key.key:ro \
+  -v $PWD/cosign.pub:/cosign.pub:ro \
+  ghcr.io/siderolabs/image-factory:v0.9.0 \
+  -insecure-image-registry \
+  -insecure-installer-internal-repository \
+  -insecure-schematic-service-repository \
+  -external-url $FACTORY_URL \
+  -image-registry $REGISTRY_ENDPOINT \
+  -installer-external-repository $REGISTRY_ENDPOINT/siderolabs \
+  -installer-internal-repository $REGISTRY_ENDPOINT/siderolabs \
+  -schematic-service-repository $REGISTRY_ENDPOINT/siderolabs/image-factory/schematic \
+  -cache-repository $REGISTRY_ENDPOINT/siderolabs/cache \
+  -cache-signing-key-path /signing-key.key \
+  -container-signature-pubkey /cosign.pub \
+  -cache-cdn-enabled=false \
+  -cache-s3-enabled=false
 ```
+
+
+
+You should now be able to browse to https://factory.internal:8080 and view the Image Factory web interface. If your server or network has any firewall rules you may need to allow TCP traffic to the host.
 
 ## Run Omni
 
-After the image factory is running you can continue to the [Omni documentation for a self-hosted installation](../self-hosted/deploy-omni-on-prem).
+After the image factory is running you can continue to the [Omni Airgapped documentation](../install-airgapped-omni).