diff --git a/gloo-mesh/core/byo-redis/2-6/default/README.md b/gloo-mesh/core/byo-redis/2-6/default/README.md deleted file mode 100644 index 40a9779a34..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/README.md +++ /dev/null @@ -1,2827 +0,0 @@ - - - - - -
Gloo Mesh Enterprise
- -#
Gloo Mesh Core (2.6.5)
- - - -## Table of Contents -* [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) -* [Lab 2 - Deploy own redis](#lab-2---deploy-own-redis-) -* [Lab 3 - Deploy own redis](#lab-3---deploy-own-redis-) -* [Lab 4 - Deploy own redis](#lab-4---deploy-own-redis-) -* [Lab 5 - Deploy and register Gloo Mesh](#lab-5---deploy-and-register-gloo-mesh-) -* [Lab 6 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-6---deploy-istio-using-gloo-mesh-lifecycle-manager-) -* [Lab 7 - Deploy the Bookinfo demo app](#lab-7---deploy-the-bookinfo-demo-app-) -* [Lab 8 - Deploy the httpbin demo app](#lab-8---deploy-the-httpbin-demo-app-) -* [Lab 9 - Expose the productpage service through a gateway using Istio resources](#lab-9---expose-the-productpage-service-through-a-gateway-using-istio-resources-) -* [Lab 10 - Introduction to Insights](#lab-10---introduction-to-insights-) -* [Lab 11 - Insights related to configuration errors](#lab-11---insights-related-to-configuration-errors-) -* [Lab 12 - Insights related to security issues](#lab-12---insights-related-to-security-issues-) -* [Lab 13 - Insights related to health issues](#lab-13---insights-related-to-health-issues-) - - - -## Introduction - -[Gloo Mesh Core](https://www.solo.io/products/gloo-mesh/) is a management plane that makes it easy to operate [Istio](https://istio.io). - -Gloo Mesh Core works with community [Istio](https://istio.io/) out of the box. -You get instant insights into your Istio environment through a custom dashboard. -Observability pipelines let you analyze many data sources that you already have. -You can even automate installing and upgrading Istio with the Gloo lifecycle manager, on one or many Kubernetes clusters deployed anywhere. - -But Gloo Mesh Core includes more than tooling to complement an existing Istio installation. -You can also replace community Istio with Solo's hardened Istio images. These images unlock enterprise-level support. -Later, you might choose to upgrade seamlessly to Gloo Mesh Enterprise for a full-stack service mesh and API gateway solution. -This approach lets you scale as you need more advanced routing and security features. - -### Istiosupport - -The Gloo Mesh Core subscription includes end-to-end Istio support: - -* Upstream feature development -* CI/CD-ready automated installation and upgrade -* End-to-end Istio support and CVE security patching -* Long-term n-4 version support with Solo images -* Special image builds for distroless and FIPS compliance -* 24x7 production support and one-hour Severity 1 SLA - -### Gloo Mesh Core overview - -Gloo Mesh Core provides many unique features, including: - -* Single pane of glass for operational management of Istio, including global observability -* Insights based on environment checks with corrective actions and best practices -* [Cilium](https://cilium.io/) support -* Seamless migration to full-stack service mesh - -### Want to learn more about Gloo Mesh Core? - -You can find more information about Gloo Mesh Core in the official documentation: - - - - -## Lab 1 - Deploy KinD clusters - - -Clone this repository and go to the directory where this `README.md` file is. 
- -Set the context environment variables: - -```bash -export MGMT=mgmt -export CLUSTER1=cluster1 -export CLUSTER2=cluster2 -``` - -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): - -```bash -./scripts/deploy-aws.sh 1 mgmt -./scripts/deploy-aws.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws.sh 3 cluster2 us-west us-west-2 -``` - -Then run the following commands to wait for all the Pods to be ready: - -```bash -./scripts/check.sh mgmt -./scripts/check.sh cluster1 -./scripts/check.sh cluster2 -``` - -**Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. - -Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: - -``` -CURRENT NAME CLUSTER AUTHINFO NAMESPACE - cluster1 kind-cluster1 cluster1 -* cluster2 kind-cluster2 cluster2 - mgmt kind-mgmt kind-mgmt -``` - -Run the following command to make `mgmt` the current cluster. - -```bash -kubectl config use-context ${MGMT} -``` - - - - - -## Lab 2 - Deploy own redis - -The goal of this step is to simulate your own external Redis in a cloud instance, such as AWS ElastiCache, Redis Cloud or Google Cloud Memorystore. -Let's install Redis on the cluster. We'll disable persistence and set the username and password for the Redis server to `{ vars.redis_user}` and `{ vars.redis_password}` respectively. 
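Once the Redis pod and its TLS secret are in place (both are created in this lab), you can optionally sanity-check authentication and the TLS listener from inside the management cluster. This is a minimal sketch, not part of the lab: it assumes the `requirepass` value from the ConfigMap below (`defaultuserpassword`; use the `default` user's ACL password instead if that is what wins in your setup) and skips certificate verification because the certificate is self-signed.

```bash
# Optional check: run redis-cli from a throwaway pod in the redis-ns namespace and
# authenticate over TLS against the redis-ext Service. --insecure skips server
# certificate verification (the CA created below is self-signed).
kubectl --context ${MGMT} -n redis-ns run redis-check --rm -i \
  --image=redis:7.4.0 --restart=Never -- \
  redis-cli -h redis-ext -p 6379 --tls --insecure \
  --user default --pass defaultuserpassword PING
```

A `PONG` response confirms the TLS port and credentials are wired up as expected. Now deploy Redis: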
- -```bash -kubectl apply --context ${MGMT} -f - <passwordpassword ~* +@all - requirepass defaultuserpassword - save "" - appendonly no - maxmemory-policy noeviction - tls-port 6379 - port 0 - - tls-cert-file /etc/redis/tls/redis.crt - tls-key-file /etc/redis/tls/redis.key - tls-ca-cert-file /etc/redis/tls/ca.crt - - tls-auth-clients no - tls-replication yes - tls-cluster yes ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: redis - namespace: redis-ns ---- -apiVersion: v1 -kind: Service -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - ports: - - name: tcp - port: 6379 - selector: - app: redis ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - replicas: 1 - selector: - matchLabels: - app: redis - version: v1 - template: - metadata: - labels: - app: redis - version: v1 - spec: - serviceAccountName: redis - containers: - - image: redis:7.4.0 - name: redis - ports: - - name: tcp - containerPort: 6379 - volumeMounts: - - name: redis-config-volume - mountPath: /usr/local/etc/redis/redis.conf - subPath: redis.conf - - name: redis-tls - mountPath: /etc/redis/tls - readOnly: true - command: ["redis-server", "/usr/local/etc/redis/redis.conf"] - volumes: - - name: redis-config-volume - configMap: - name: redis-config - - name: redis-tls - secret: - secretName: redis-tls - optional: true -EOF -``` -Now we'll expose the Redis service only with TLS enabled: -```bash -openssl genrsa -out ca${MGMT}.key 2048 -openssl req -x509 -new -nodes -key ca${MGMT}.key -days 1825 -out ca${MGMT}.crt -subj "/CN=redis-ca" -openssl genrsa -out redis${MGMT}.key 2048 -openssl req -new -key redis${MGMT}.key -out redis${MGMT}.csr -config -< ./test.js -const helpers = require('./tests/chai-exec'); - -describe("Redis is healthy", () => { - it(`Redis service is present`, () => helpers.k8sObjectIsPresent({ context: `${process.env.MGMT}`, namespace: "redis-ns", k8sType: "service", k8sObj: "redis-ext" })); - it(`Redis pods are ready`, () => helpers.checkDeploymentsWithLabels({ context: `${process.env.MGMT}`, namespace: "redis-ns", labels: "app=redis", instances: 1 })); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-redis/tests/redis-healthy.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=30 --bail || exit 1 ---> - - - - -## Lab 3 - Deploy own redis - -The goal of this step is to simulate your own external Redis in a cloud instance, such as AWS ElastiCache, Redis Cloud or Google Cloud Memorystore. -Let's install Redis on the cluster. We'll disable persistence and set the username and password for the Redis server to `{ vars.redis_user}` and `{ vars.redis_password}` respectively. 
- -```bash -kubectl apply --context ${CLUSTER1} -f - <passwordpassword ~* +@all - requirepass defaultuserpassword - save "" - appendonly no - maxmemory-policy noeviction - tls-port 6379 - port 0 - - tls-cert-file /etc/redis/tls/redis.crt - tls-key-file /etc/redis/tls/redis.key - tls-ca-cert-file /etc/redis/tls/ca.crt - - tls-auth-clients no - tls-replication yes - tls-cluster yes ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: redis - namespace: redis-ns ---- -apiVersion: v1 -kind: Service -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - ports: - - name: tcp - port: 6379 - selector: - app: redis ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - replicas: 1 - selector: - matchLabels: - app: redis - version: v1 - template: - metadata: - labels: - app: redis - version: v1 - spec: - serviceAccountName: redis - containers: - - image: redis:7.4.0 - name: redis - ports: - - name: tcp - containerPort: 6379 - volumeMounts: - - name: redis-config-volume - mountPath: /usr/local/etc/redis/redis.conf - subPath: redis.conf - - name: redis-tls - mountPath: /etc/redis/tls - readOnly: true - command: ["redis-server", "/usr/local/etc/redis/redis.conf"] - volumes: - - name: redis-config-volume - configMap: - name: redis-config - - name: redis-tls - secret: - secretName: redis-tls - optional: true -EOF -``` -Now we'll expose the Redis service only with TLS enabled: -```bash -openssl genrsa -out ca${CLUSTER1}.key 2048 -openssl req -x509 -new -nodes -key ca${CLUSTER1}.key -days 1825 -out ca${CLUSTER1}.crt -subj "/CN=redis-ca" -openssl genrsa -out redis${CLUSTER1}.key 2048 -openssl req -new -key redis${CLUSTER1}.key -out redis${CLUSTER1}.csr -config -< ./test.js -const helpers = require('./tests/chai-exec'); - -describe("Redis is healthy", () => { - it(`Redis service is present`, () => helpers.k8sObjectIsPresent({ context: `${process.env.CLUSTER1}`, namespace: "redis-ns", k8sType: "service", k8sObj: "redis-ext" })); - it(`Redis pods are ready`, () => helpers.checkDeploymentsWithLabels({ context: `${process.env.CLUSTER1}`, namespace: "redis-ns", labels: "app=redis", instances: 1 })); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-redis/tests/redis-healthy.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=30 --bail || exit 1 ---> - - - - -## Lab 4 - Deploy own redis - -The goal of this step is to simulate your own external Redis in a cloud instance, such as AWS ElastiCache, Redis Cloud or Google Cloud Memorystore. -Let's install Redis on the cluster. We'll disable persistence and set the username and password for the Redis server to `{ vars.redis_user}` and `{ vars.redis_password}` respectively. 
- -```bash -kubectl apply --context ${CLUSTER2} -f - <passwordpassword ~* +@all - requirepass defaultuserpassword - save "" - appendonly no - maxmemory-policy noeviction - tls-port 6379 - port 0 - - tls-cert-file /etc/redis/tls/redis.crt - tls-key-file /etc/redis/tls/redis.key - tls-ca-cert-file /etc/redis/tls/ca.crt - - tls-auth-clients no - tls-replication yes - tls-cluster yes ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: redis - namespace: redis-ns ---- -apiVersion: v1 -kind: Service -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - ports: - - name: tcp - port: 6379 - selector: - app: redis ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: redis-ext - namespace: redis-ns - labels: - app: redis -spec: - replicas: 1 - selector: - matchLabels: - app: redis - version: v1 - template: - metadata: - labels: - app: redis - version: v1 - spec: - serviceAccountName: redis - containers: - - image: redis:7.4.0 - name: redis - ports: - - name: tcp - containerPort: 6379 - volumeMounts: - - name: redis-config-volume - mountPath: /usr/local/etc/redis/redis.conf - subPath: redis.conf - - name: redis-tls - mountPath: /etc/redis/tls - readOnly: true - command: ["redis-server", "/usr/local/etc/redis/redis.conf"] - volumes: - - name: redis-config-volume - configMap: - name: redis-config - - name: redis-tls - secret: - secretName: redis-tls - optional: true -EOF -``` -Now we'll expose the Redis service only with TLS enabled: -```bash -openssl genrsa -out ca${CLUSTER2}.key 2048 -openssl req -x509 -new -nodes -key ca${CLUSTER2}.key -days 1825 -out ca${CLUSTER2}.crt -subj "/CN=redis-ca" -openssl genrsa -out redis${CLUSTER2}.key 2048 -openssl req -new -key redis${CLUSTER2}.key -out redis${CLUSTER2}.csr -config -< ./test.js -const helpers = require('./tests/chai-exec'); - -describe("Redis is healthy", () => { - it(`Redis service is present`, () => helpers.k8sObjectIsPresent({ context: `${process.env.CLUSTER2}`, namespace: "redis-ns", k8sType: "service", k8sObj: "redis-ext" })); - it(`Redis pods are ready`, () => helpers.checkDeploymentsWithLabels({ context: `${process.env.CLUSTER2}`, namespace: "redis-ns", labels: "app=redis", instances: 1 })); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-redis/tests/redis-healthy.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=30 --bail || exit 1 ---> - - - - -## Lab 5 - Deploy and register Gloo Mesh -[VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") - - -Before we get started, let's install the `meshctl` CLI: - -```bash -export GLOO_MESH_VERSION=v2.6.5 -curl -sL https://run.solo.io/meshctl/install | sh - -export PATH=$HOME/.gloo-mesh/bin:$PATH -``` - -Run the following commands to deploy the Gloo Mesh management plane: -We also need to set a secret for the Redis deployment used to store the Gloo Mesh configuration: -```bash -kubectl apply --context ${MGMT} -f - < - -Then, you need to set the environment variable to tell the Gloo Mesh agents how to communicate with the management plane: - - - -```bash -export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 -export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) -export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 -export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n 
gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 -``` - -Check that the variables have correct values: -``` -echo $HOST_GLOO_MESH -echo $ENDPOINT_GLOO_MESH -``` - - -Finally, you need to register the cluster(s). - - -Here is how you register the first one: - -```bash -kubectl apply --context ${MGMT} -f - < ca.crt -kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt -rm ca.crt - -kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token -kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token -rm token - -helm upgrade --install gloo-platform-crds gloo-platform-crds \ - --repo https://storage.googleapis.com/gloo-platform/helm-charts \ - --namespace gloo-mesh \ - --kube-context ${CLUSTER1} \ - --version 2.6.5 - -helm upgrade --install gloo-platform gloo-platform \ - --repo https://storage.googleapis.com/gloo-platform/helm-charts \ - --namespace gloo-mesh \ - --kube-context ${CLUSTER1} \ - --version 2.6.5 \ - -f -< ca.crt -kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt -rm ca.crt - -kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token -kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token -rm token - -helm upgrade --install gloo-platform-crds gloo-platform-crds \ - --repo https://storage.googleapis.com/gloo-platform/helm-charts \ - --namespace gloo-mesh \ - --kube-context ${CLUSTER2} \ - --version 2.6.5 - -helm upgrade --install gloo-platform gloo-platform \ - --repo https://storage.googleapis.com/gloo-platform/helm-charts \ - --namespace gloo-mesh \ - --kube-context ${CLUSTER2} \ - --version 2.6.5 \ - -f -< ./test.js -var chai = require('chai'); -var expect = chai.expect; -const helpers = require('./tests/chai-exec'); -describe("Cluster registration", () => { - it("cluster1 is registered", () => { - podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); - expect(command).to.contain("cluster1"); - }); - it("cluster2 is registered", () => { - podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); - expect(command).to.contain("cluster2"); - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - - - -## Lab 6 - Deploy Istio using Gloo Mesh Lifecycle Manager -[VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") - -We are going to deploy Istio 
using Gloo Mesh Lifecycle Manager. - -
- Install istioctl - -Install `istioctl` if not already installed as it will be useful in some of the labs that follow. - -```bash -curl -L https://istio.io/downloadIstio | sh - - -if [ -d "istio-"*/ ]; then - cd istio-*/ - export PATH=$PWD/bin:$PATH - cd .. -fi -``` - -That's it! -
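Before moving on, you can quickly confirm that the CLI is available on your `PATH`; the `--remote=false` flag keeps the check local, so it does not need to reach any cluster.

```bash
# Print only the locally installed istioctl client version.
istioctl version --remote=false
```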
- - -Let's create Kubernetes services for the gateways: - -```bash -kubectl --context ${CLUSTER1} create ns istio-gateways - -kubectl apply --context ${CLUSTER1} -f - < - - - - -```bash -export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" -export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" -``` - - - - - - -## Lab 7 - Deploy the Bookinfo demo app -[VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link") - -We're going to deploy the bookinfo application to demonstrate several features of Gloo Mesh. - -You can find more information about this application [here](https://istio.io/latest/docs/examples/bookinfo/). - -Run the following commands to deploy the bookinfo application on `cluster1`: - -```bash -kubectl --context ${CLUSTER1} create ns bookinfo-frontends -kubectl --context ${CLUSTER1} create ns bookinfo-backends -kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite -kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite - -# Deploy the frontend bookinfo service in the bookinfo-frontends namespace -kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml - -# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 -kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ - -f data/steps/deploy-bookinfo/details-v1.yaml \ - -f data/steps/deploy-bookinfo/ratings-v1.yaml \ - -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml - -# Update the reviews service to display where it is coming from -kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} -kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} -``` - - - -You can check that the app is running using the following command: - -```shell -kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER1} -n bookinfo-backends get pods -``` - -Note that we deployed the `productpage` service in the `bookinfo-frontends` namespace and the other services in the `bookinfo-backends` namespace. - -And we deployed the `v1` and `v2` versions of the `reviews` microservice, not the `v3` version. 
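If you want to double-check that, list the `reviews` deployments on the first cluster; the `app` and `version` labels used below come from the manifests you just applied.

```bash
# Expect reviews-v1 and reviews-v2 only; reviews-v3 is not deployed on cluster1.
kubectl --context ${CLUSTER1} -n bookinfo-backends get deployments -l app=reviews
```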
- -Now, run the following commands to deploy the bookinfo application on `cluster2`: - -```bash -kubectl --context ${CLUSTER2} create ns bookinfo-frontends -kubectl --context ${CLUSTER2} create ns bookinfo-backends -kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite -kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite - -# Deploy the frontend bookinfo service in the bookinfo-frontends namespace -kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml -# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions -kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ - -f data/steps/deploy-bookinfo/details-v1.yaml \ - -f data/steps/deploy-bookinfo/ratings-v1.yaml \ - -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ - -f data/steps/deploy-bookinfo/reviews-v3.yaml -# Update the reviews service to display where it is coming from -kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} -kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} -kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} - -``` - - - -Confirm that `v1`, `v2` and `v3` of the `reviews` service are now running in the second cluster: - -```bash -kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods -``` - -As you can see, we deployed all three versions of the `reviews` microservice on this cluster. - - - - - -## Lab 8 - Deploy the httpbin demo app -[VIDEO LINK](https://youtu.be/w1xB-o_gHs0 "Video Link") - -We're going to deploy the httpbin application to demonstrate several features of Gloo Mesh. - -You can find more information about this application [here](http://httpbin.org/). - -Run the following commands to deploy the httpbin app on `cluster1`. The deployment will be called `not-in-mesh` and won't have the sidecar injected, because of the annotation `sidecar.istio.io/inject: "false"`. - -```bash -kubectl --context ${CLUSTER1} create ns httpbin -kubectl apply --context ${CLUSTER1} -f - </dev/null -do - sleep 1 - echo -n . -done" -echo ---> -``` -You can follow the progress using the following command: - -```bash -kubectl --context ${CLUSTER1} -n httpbin get pods -``` - -```,nocopy -NAME READY STATUS RESTARTS AGE -in-mesh-5d9d9549b5-qrdgd 2/2 Running 0 11s -not-in-mesh-5c64bb49cd-m9kwm 1/1 Running 0 11s -``` - - - - -## Lab 9 - Expose the productpage service through a gateway using Istio resources - -In this step, we're going to expose the `productpage` service through the Ingress Gateway using Istio resources. - -First, you need to create a `Gateway` object to configure the Istio Ingress Gateway in cluster1 to listen to incoming requests. 
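The `Gateway` below selects the ingress gateway workload by label. In this workshop the gateway Service carries the `istio: ingressgateway` label (the same label used earlier to look up `HOST_GW_CLUSTER1`), so you can verify that the target exists before applying the manifest; a quick sketch:

```bash
# The Service should be listed, and the gateway pods should be Running in the same namespace.
kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway
kubectl --context ${CLUSTER1} -n istio-gateways get pods
```

With that confirmed, create the `Gateway`: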
- -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -const helpers = require('./tests/chai-http'); - -describe("productpage is available (HTTP)", () => { - it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); -}) -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -Now, let's secure the access through TLS. -Let's first create a private key and a self-signed certificate: - -```bash -openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ - -keyout tls.key -out tls.crt -subj "/CN=*" -``` - -Then, you have to store them in a Kubernetes secret running the following commands: - -```bash -kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ ---from-file=tls.key=tls.key \ ---from-file=tls.crt=tls.crt - -kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ ---from-file=tls.key=tls.key \ ---from-file=tls.crt=tls.crt -``` - -Finally, you need to update the `Gateway` to use this secret: - -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -const helpers = require('./tests/chai-http'); - -describe("productpage is available (HTTPS)", () => { - it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); -}) -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - - - - - - -## Lab 10 - Introduction to Insights - - - -Gloo Mesh Insights are generated using configuration, logs and metrics collected by the Gloo Mesh agents on the different clusters. - -They are grouped in different categories: -- best practices -- configuration -- health -- security -- ... - -If you think some insights aren't relevant or too noisy, you can suppress them. - - - - - -For example, right now we have the following insight: - -![BP0002 insight](images/steps/insights-intro/bp0002.png) - -Note the code of this insight: BP0002 - -If you don't plan to update your `Gateway` objects to follow the suggested best practice, you can create the following object to suppress it. 
- -```bash -kubectl apply --context ${MGMT} -f - < ./test.js -const helpersHttp = require('./tests/chai-http'); -const InsightsPage = require('./tests/pages/insights-page'); -const constants = require('./tests/pages/constants'); -const puppeteer = require('puppeteer'); -const { enhanceBrowser } = require('./tests/utils/enhance-browser'); -var chai = require('chai'); -var expect = chai.expect; - -afterEach(function (done) { - if (this.currentTest.currentRetry() > 0) { - process.stdout.write("."); - setTimeout(done, 4000); - } else { - done(); - } -}); - -describe("Insights UI", function() { - let browser; - let insightsPage; - - // Use Mocha's 'before' hook to set up Puppeteer - beforeEach(async function() { - browser = await puppeteer.launch({ - headless: "new", - ignoreHTTPSErrors: true, - args: ['--no-sandbox', '--disable-setuid-sandbox'], - }); - browser = enhanceBrowser(browser, this.currentTest.title); - let page = await browser.newPage(); - await page.setViewport({ width: 1500, height: 1000 }); - insightsPage = new InsightsPage(page); - }); - - // Use Mocha's 'after' hook to close Puppeteer - afterEach(async function() { - await browser.close(); - }); - - it("should not display BP0002 in the UI", async () => { - await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); - await insightsPage.selectClusters(['cluster1', 'cluster2']); - await insightsPage.selectInsightTypes([constants.InsightType.BP]); - const data = await insightsPage.getTableDataRows() - expect(data.some(item => item.includes("is not namespaced"))).to.be.false; - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -The corresponding insight isn't displayed anymore in the UI. - -The UI can be used to display all the current insights, but metrics are also produced when insights are triggered. - -It allows you to have an historical view of the insights. - -Run the following command to see the insights metrics: - -```shell -pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}') -kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- curl -s "http://localhost:9090/api/v1/query?query=gloo_mesh_insights" | jq -r '.data.result[].metric.code' -``` - -It will list the current insights in Prometheus: - -```,nocopy -BP0001 -SYS0004 -SYS0004 -SYS0006 -... -``` - -Note that some of them are suppressed by default. They are used internally. - -As this is a gauge, you can use it to display historical data. - -You can get the details about a specific entry in the metrics. 
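You can do that with the command below, which filters the JSON output with `jq` on the client side. As an aside, the same filtering can be pushed into PromQL itself with a label matcher; here is a hedged variant of the same query (the `code` label name is taken from the sample output shown further down):

```bash
# Same Prometheus endpoint as below, but the label matcher does the filtering server-side.
pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- \
  curl -s -G "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=gloo_mesh_insights{code="BP0001"}' | jq '.data.result'
```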
- -```shell -pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}') -kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- curl -s "http://localhost:9090/api/v1/query?query=gloo_mesh_insights" | jq -r '.data.result[]|select(.metric.code=="BP0001")' -``` - -```json,nocopy -{ - "metric": { - "__name__": "gloo_mesh_insights", - "app": "gloo-mesh-mgmt-server", - "category": "BP", - "cluster": "cluster1", - "code": "BP0001", - "collector_pod": "gloo-telemetry-collector-agent-pdptz", - "component": "agent-collector", - "controller_revision_hash": "5475869bf", - "key": "0001", - "namespace": "gloo-mesh", - "pod": "gloo-mesh-mgmt-server-7bc5478744-pqd9m", - "pod_template_generation": "1", - "severity": "WARNING", - "target": "bookinfo.bookinfo-frontends.value:\"networking.istio.io\".value:\"VirtualService\".cluster1", - "target_type": "resource" - }, - "value": [ - 1702643487.08, - "1" - ] -} -``` - -The `target` value can be read: the `bookinfo` object of kind `VirtualService` (with the apiVersion `networking.istio.io`) in the `bookinfo-frontends` namespace. - -Let's have a look at another insight. - -![BP0001 insight](images/steps/insights-intro/bp0001.png) - -The resolution step is telling us the following: - -> _In the spec.exportTo field of your VirtualService Istio resource, list namespaces to export the VirtualService to. When you export a VirtualService, only sidecars and gateways that exist in the namespaces that you specify can use it. Note that the value "." makes the VirtualService available only in the same namespace that the VirtualService is defined in, and "*" exports the VirtualService to all namespaces._ - -You can update the `VirtualService` to add the `exportTo` field as suggested: - -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -const helpersHttp = require('./tests/chai-http'); -const InsightsPage = require('./tests/pages/insights-page'); -const constants = require('./tests/pages/constants'); -const puppeteer = require('puppeteer'); -const { enhanceBrowser } = require('./tests/utils/enhance-browser'); -var chai = require('chai'); -var expect = chai.expect; - -afterEach(function (done) { - if (this.currentTest.currentRetry() > 0) { - process.stdout.write("."); - setTimeout(done, 4000); - } else { - done(); - } -}); - -describe("Insights UI", function() { - let browser; - let insightsPage; - - // Use Mocha's 'before' hook to set up Puppeteer - beforeEach(async function() { - browser = await puppeteer.launch({ - headless: "new", - ignoreHTTPSErrors: true, - args: ['--no-sandbox', '--disable-setuid-sandbox'], - }); - browser = enhanceBrowser(browser, this.currentTest.title); - let page = await browser.newPage(); - await page.setViewport({ width: 1500, height: 1000 }); - insightsPage = new InsightsPage(page); - }); - - // Use Mocha's 'after' hook to close Puppeteer - afterEach(async function() { - await browser.close(); - }); - - it("should not display BP0001 in the UI", async () => { - await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); - await insightsPage.selectClusters(['cluster1', 'cluster2']); - await insightsPage.selectInsightTypes([constants.InsightType.BP]); - const data = await insightsPage.getTableDataRows() - expect(data.some(item => item.includes("is not namespaced"))).to.be.false; - }); -}); -EOF -echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -The UI shouldn't display this insight anymore. - - - -## Lab 11 - Insights related to configuration errors - -In this lab, we're going to focus on insights related to configuration errors. - -Let's create a new `VirtualService` to send all the requests from the `productpage` service to only the `v1` version of the `reviews` service. - -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -var chai = require('chai'); -var expect = chai.expect; -const helpers = require('./tests/chai-exec'); - -describe("Insight generation", () => { - it("Insight CFG0001 has been triggered in the source (MGMT)", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); - helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); - const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; - const match = command.match(regex); - expect(match).to.not.be.null; - }); - - it("Insight CFG0001 has been triggered in PROMETHEUS", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); - let result = JSON.parse(command); - let active = false; - result.data.result.forEach(item => { - if(item.metric.code == "CFG0001" && item.value[1] > 0) { - active = true - } - }); - expect(active).to.be.true; - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -If you refresh the `productpage` tab, you'll see the error `Sorry, product reviews are currently unavailable for this book.`. - -And if you go to the Gloo Mesh UI, you'll see an insight has been generated: - -![CFG0001 insight](images/steps/insights-config/cfg0001.png) - -That's because you haven't created a `DestinationRule` to define the `v1` subset. - -Let's solve the issue. 
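Before applying the fix, you can also reproduce the failure from the command line. This is a rough sketch: it assumes that `cluster1-bookinfo.example.com` resolves to the cluster1 gateway in your environment (as the earlier tests assume) and that the gateway serves HTTPS with the self-signed certificate from the previous lab, hence `-k`.

```bash
# Grep the productpage HTML for the error message shown in the browser.
curl -sk https://cluster1-bookinfo.example.com/productpage \
  | grep -o "product reviews are currently unavailable" \
  || echo "reviews rendered fine"
```

Now create the missing `DestinationRule`: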
- -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -var chai = require('chai'); -var expect = chai.expect; -const helpers = require('./tests/chai-exec'); - -describe("Insight generation", () => { - it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); - helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); - const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; - const match = command.match(regex); - expect(match).to.be.null; - }); - - it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); - let result = JSON.parse(command); - let active = false; - result.data.result.forEach(item => { - if(item.metric.code == "CFG0001" && item.value[1] > 0) { - active = true - } - }); - expect(active).to.be.false; - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -Let's delete the objects we've created: - -```bash -kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews -kubectl --context ${CLUSTER1} -n bookinfo-backends delete destinationrule reviews -``` - - - -## Lab 12 - Insights related to security issues - -In this lab, we're going to focus on insights related to security issues. - -Let's create a new `AuthorizationPolicy` to deny requests to the `reviews` service sent by a service in the `httpbin` namespace. 
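After you apply the policy below, you can probe its effect from the in-mesh `httpbin` client deployed earlier. This is a sketch only: it assumes `curl` is available in the `istio-proxy` sidecar image, and the result depends on whether Istio can identify the caller, which is exactly what this lab's insight is about.

```bash
# Call the reviews service from the httpbin namespace and print the HTTP status code.
# A 403 means the DENY policy matched this caller; callers without a sidecar may not be
# identified the same way, which is the gap the insight below highlights.
kubectl --context ${CLUSTER1} -n httpbin exec deploy/in-mesh -c istio-proxy -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://reviews.bookinfo-backends:9080/reviews/0
```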
- -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -var chai = require('chai'); -var expect = chai.expect; -const helpers = require('./tests/chai-exec'); - -describe("Insight generation", () => { - it("Insight SEC0008 has been triggered in the source (MGMT)", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); - helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); - const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; - const match = command.match(regex); - expect(match).to.not.be.null; - }); - - it("Insight SEC0008 has been triggered in PROMETHEUS", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); - let result = JSON.parse(command); - let active = false; - result.data.result.forEach(item => { - if(item.metric.code == "SEC0008" && item.value[1] > 0) { - active = true - } - }); - expect(active).to.be.true; - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -You can fix the issue by creating a `PeerAuthentication` object to enforce mTLS globally: - -```bash -kubectl apply --context ${CLUSTER1} -f - < ./test.js -var chai = require('chai'); -var expect = chai.expect; -const helpers = require('./tests/chai-exec'); - -describe("Insight generation", () => { - it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); - helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); - const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; - const match = command.match(regex); - expect(match).to.be.null; - }); - - it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { - helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); - command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); - let result = JSON.parse(command); - let active = false; - result.data.result.forEach(item => { - if(item.metric.code == 
"SEC0008" && item.value[1] > 0) { - active = true - } - }); - expect(active).to.be.false; - }); -}); -EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || exit 1 ---> - -Let's delete the objects we've created: - -```bash -kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy reviews -kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default -``` - - - -## Lab 13 - Insights related to health issues - - -This step shows Gloo Mesh Core insights about Cilium. Hence, it is skipped when Cilium is not installed. - - - diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/.gitkeep b/gloo-mesh/core/byo-redis/2-6/default/data/.gitkeep deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/details-v1.yaml b/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/details-v1.yaml deleted file mode 100644 index 6bae76cb17..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/details-v1.yaml +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -################################################################################################## -# Details service -################################################################################################## -apiVersion: v1 -kind: Service -metadata: - name: details - labels: - app: details - service: details -spec: - ports: - - port: 9080 - name: http - selector: - app: details ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: bookinfo-details - labels: - account: details ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: details-v1 - labels: - app: details - version: v1 -spec: - replicas: 1 - selector: - matchLabels: - app: details - version: v1 - template: - metadata: - labels: - app: details - version: v1 - spec: - serviceAccountName: bookinfo-details - containers: - - name: details - image: docker.io/istio/examples-bookinfo-details-v1:1.20.2 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9080 ---- \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/productpage-v1.yaml b/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/productpage-v1.yaml deleted file mode 100644 index 5c099e7432..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/productpage-v1.yaml +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -################################################################################################## -# Productpage services -################################################################################################## -apiVersion: v1 -kind: Service -metadata: - name: productpage - labels: - app: productpage - service: productpage -spec: - ports: - - port: 9080 - name: http - selector: - app: productpage ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: bookinfo-productpage - labels: - account: productpage ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: productpage-v1 - labels: - app: productpage - version: v1 -spec: - replicas: 1 - selector: - matchLabels: - app: productpage - version: v1 - template: - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9080" - prometheus.io/path: "/metrics" - labels: - app: productpage - version: v1 - spec: - serviceAccountName: bookinfo-productpage - containers: - - name: productpage - image: docker.io/istio/examples-bookinfo-productpage-v1:1.20.2 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9080 - volumeMounts: - - name: tmp - mountPath: /tmp - env: - - name: DETAILS_HOSTNAME - value: details.bookinfo-backends.svc.cluster.local - - name: REVIEWS_HOSTNAME - value: reviews.bookinfo-backends.svc.cluster.local - volumes: - - name: tmp - emptyDir: {} ---- \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/ratings-v1.yaml b/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/ratings-v1.yaml deleted file mode 100644 index ff4971cb34..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/ratings-v1.yaml +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -################################################################################################## -# Ratings service -################################################################################################## -apiVersion: v1 -kind: Service -metadata: - name: ratings - labels: - app: ratings - service: ratings -spec: - ports: - - port: 9080 - name: http - selector: - app: ratings ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: bookinfo-ratings - labels: - account: ratings ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: ratings-v1 - labels: - app: ratings - version: v1 -spec: - replicas: 1 - selector: - matchLabels: - app: ratings - version: v1 - template: - metadata: - labels: - app: ratings - version: v1 - spec: - serviceAccountName: bookinfo-ratings - containers: - - name: ratings - image: docker.io/istio/examples-bookinfo-ratings-v1:1.20.2 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9080 ---- \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v1-v2.yaml b/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v1-v2.yaml deleted file mode 100644 index 4c28e57234..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v1-v2.yaml +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -################################################################################################## -# Reviews service -################################################################################################## -apiVersion: v1 -kind: Service -metadata: - name: reviews - labels: - app: reviews - service: reviews -spec: - ports: - - port: 9080 - name: http - selector: - app: reviews ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: bookinfo-reviews - labels: - account: reviews ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: reviews-v1 - labels: - app: reviews - version: v1 -spec: - replicas: 1 - selector: - matchLabels: - app: reviews - version: v1 - template: - metadata: - labels: - app: reviews - version: v1 - spec: - serviceAccountName: bookinfo-reviews - containers: - - name: reviews - image: docker.io/istio/examples-bookinfo-reviews-v1:1.20.2 - imagePullPolicy: IfNotPresent - env: - - name: LOG_DIR - value: "/tmp/logs" - ports: - - containerPort: 9080 - volumeMounts: - - name: tmp - mountPath: /tmp - - name: wlp-output - mountPath: /opt/ibm/wlp/output - volumes: - - name: wlp-output - emptyDir: {} - - name: tmp - emptyDir: {} ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: reviews-v2 - labels: - app: reviews - version: v2 -spec: - replicas: 1 - selector: - matchLabels: - app: reviews - version: v2 - template: - metadata: - labels: - app: reviews - version: v2 - spec: - serviceAccountName: bookinfo-reviews - containers: - - name: reviews - image: docker.io/istio/examples-bookinfo-reviews-v2:1.20.2 - imagePullPolicy: IfNotPresent - env: - - name: LOG_DIR - value: "/tmp/logs" - ports: - - containerPort: 9080 - volumeMounts: - - name: tmp - mountPath: /tmp - - name: wlp-output - mountPath: /opt/ibm/wlp/output - volumes: - - name: wlp-output - emptyDir: {} - - name: tmp - emptyDir: {} ---- \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v3.yaml b/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v3.yaml deleted file mode 100644 index b239c02688..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/data/steps/deploy-bookinfo/reviews-v3.yaml +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -################################################################################################## -# Reviews service -################################################################################################## -apiVersion: apps/v1 -kind: Deployment -metadata: - name: reviews-v3 - labels: - app: reviews - version: v3 -spec: - replicas: 1 - selector: - matchLabels: - app: reviews - version: v3 - template: - metadata: - labels: - app: reviews - version: v3 - spec: - serviceAccountName: bookinfo-reviews - containers: - - name: reviews - image: docker.io/istio/examples-bookinfo-reviews-v3:1.20.2 - imagePullPolicy: IfNotPresent - env: - - name: LOG_DIR - value: "/tmp/logs" - ports: - - containerPort: 9080 - volumeMounts: - - name: tmp - mountPath: /tmp - - name: wlp-output - mountPath: /opt/ibm/wlp/output - volumes: - - name: wlp-output - emptyDir: {} - - name: tmp - emptyDir: {} ---- \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/.gitkeep b/gloo-mesh/core/byo-redis/2-6/default/images/.gitkeep deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-gateway.png b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-gateway.png deleted file mode 100644 index a2e752c3ee..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-gateway.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.gif b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.gif deleted file mode 100644 index 4422a60d2a..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.gif and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.png b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.png deleted file mode 100644 index bb439fb58e..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh-graph.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh.png b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh.png deleted file mode 100644 index 248dc3c9ef..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-mesh.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-network.png b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-network.png deleted file mode 100644 index 27cb096bf7..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-network.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-products.png b/gloo-mesh/core/byo-redis/2-6/default/images/gloo-products.png deleted file mode 100644 index 69e4ef56ff..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/gloo-products.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/bookinfo-working.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/bookinfo-working.png deleted file mode 100644 index e91cda0b78..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/bookinfo-working.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/initial-setup.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/initial-setup.png deleted file mode 100644 index 6808fffb22..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/deploy-bookinfo/initial-setup.png and 
/dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-config/cfg0001.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-config/cfg0001.png deleted file mode 100644 index 3afef5ca38..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-config/cfg0001.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-health/hlt0011.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-health/hlt0011.png deleted file mode 100644 index 0b7abd69e0..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-health/hlt0011.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0001.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0001.png deleted file mode 100644 index efbb6cdc19..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0001.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0002.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0002.png deleted file mode 100644 index a0d589351c..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-intro/bp0002.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-security/sec0008.png b/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-security/sec0008.png deleted file mode 100644 index 976d866c34..0000000000 Binary files a/gloo-mesh/core/byo-redis/2-6/default/images/steps/insights-security/sec0008.png and /dev/null differ diff --git a/gloo-mesh/core/byo-redis/2-6/default/partials/calculate-endpoints.liquid b/gloo-mesh/core/byo-redis/2-6/default/partials/calculate-endpoints.liquid deleted file mode 100644 index e7bd4df90d..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/partials/calculate-endpoints.liquid +++ /dev/null @@ -1,58 +0,0 @@ -{%- assign fqdn_httpbin = vars.httpbin_fqdn | default: "httpbin.example.com" %} -{%- assign fqdn_bookinfo = vars.bookinfo_fqdn | default: "bookinfo.example.com" %} -{%- assign fqdn_portal = vars.portal_fqdn | default: "portal.example.com" %} -{%- assign fqdn_grpcbin = vars.grpcbin_fqdn | default: "grpcbin.example.com" %} -{%- assign fqdn_backstage = vars.backstage_fqdn | default: "backstage.example.com" %} -{%- assign fqdn_cluster1_httpbin = "cluster1-" | append: fqdn_httpbin %} -{%- assign fqdn_cluster2_httpbin = "cluster2-" | append: fqdn_httpbin %} -{%- assign fqdn_cluster1_bookinfo = "cluster1-" | append: fqdn_bookinfo %} -{%- assign fqdn_cluster2_bookinfo = "cluster2-" | append: fqdn_bookinfo %} -{%- assign fqdn_cluster1_portal = "cluster1-" | append: fqdn_portal %} -{%- assign fqdn_cluster2_portal = "cluster2-" | append: fqdn_portal %} -{%- assign fqdn_cluster1_grpcbin = "cluster1-" | append: fqdn_grpcbin %} -{%- assign fqdn_cluster2_grpcbin = "cluster2-" | append: fqdn_grpcbin %} -{%- assign fqdn_cluster1_backstage = "cluster1-" | append: fqdn_backstage %} -{%- assign fqdn_cluster2_backstage = "cluster2-" | append: fqdn_backstage %} -{%- if vars.node_port or vars.cluster1.node_port %} -{%- assign endpoint_http_gw_cluster1_httpbin = fqdn_cluster1_httpbin | append: ":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster1_httpbin = fqdn_cluster1_httpbin | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- assign endpoint_http_gw_cluster1_bookinfo = fqdn_cluster1_bookinfo | append: 
":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster1_bookinfo = fqdn_cluster1_bookinfo | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- assign endpoint_http_gw_cluster1_portal = fqdn_cluster1_portal | append: ":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster1_portal = fqdn_cluster1_portal | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- assign endpoint_http_gw_cluster1_grpcbin = fqdn_cluster1_grpcbin | append: ":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster1_grpcbin = fqdn_cluster1_grpcbin | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- assign endpoint_http_gw_cluster1_backstage = fqdn_cluster1_backstage | append: ":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster1_backstage = fqdn_cluster1_backstage | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- if vars.node_port or vars.cluster2.node_port %} -{%- assign endpoint_http_gw_cluster2_httpbin = fqdn_cluster2_httpbin | append: ":${NODEPORT_CLUSTER2_HTTP}" %} -{%- assign endpoint_https_gw_cluster2_httpbin = fqdn_cluster2_httpbin | append: ":${NODEPORT_CLUSTER2_HTTPS}" %} -{%- assign endpoint_http_gw_cluster2_bookinfo = fqdn_cluster2_bookinfo | append: ":${NODEPORT_CLUSTER2_HTTP}')" %} -{%- assign endpoint_https_gw_cluster2_bookinfo = fqdn_cluster2_bookinfo | append: ":${NODEPORT_CLUSTER2_HTTPS}" %} -{%- assign endpoint_http_gw_cluster2_portal = fqdn_cluster2_portal | append: ":${NODEPORT_CLUSTER2_HTTP}')" %} -{%- assign endpoint_https_gw_cluster2_portal = fqdn_cluster2_portal | append: ":${NODEPORT_CLUSTER2_HTTPS}" %} -{%- assign endpoint_http_gw_cluster2_grpcbin = fqdn_cluster2_grpcbin | append: ":${NODEPORT_CLUSTER1_HTTP}" %} -{%- assign endpoint_https_gw_cluster2_grpcbin = fqdn_cluster2_grpcbin | append: ":${NODEPORT_CLUSTER1_HTTPS}" %} -{%- assign endpoint_http_gw_cluster2_backstage = fqdn_cluster2_backstage | append: ":${NODEPORT_CLUSTER2_HTTP}')" %} -{%- assign endpoint_https_gw_cluster2_backstage = fqdn_cluster2_backstage | append: ":${NODEPORT_CLUSTER2_HTTPS}" %} -{%- endif %}{% comment %}cluster2 nodeport{% endcomment %} -{%- else %} -{%- assign endpoint_http_gw_cluster1_httpbin = fqdn_cluster1_httpbin %} -{%- assign endpoint_https_gw_cluster1_httpbin = fqdn_cluster1_httpbin %} -{%- assign endpoint_http_gw_cluster1_bookinfo = fqdn_cluster1_bookinfo %} -{%- assign endpoint_https_gw_cluster1_bookinfo = fqdn_cluster1_bookinfo %} -{%- assign endpoint_http_gw_cluster1_portal = fqdn_cluster1_portal %} -{%- assign endpoint_https_gw_cluster1_portal = fqdn_cluster1_portal %} -{%- assign endpoint_http_gw_cluster1_backstage = fqdn_cluster1_backstage %} -{%- assign endpoint_https_gw_cluster1_backstage = fqdn_cluster1_backstage %} -{%- assign endpoint_http_gw_cluster2_httpbin = fqdn_cluster2_httpbin %} -{%- assign endpoint_https_gw_cluster2_httpbin = fqdn_cluster2_httpbin %} -{%- assign endpoint_http_gw_cluster2_bookinfo = fqdn_cluster2_bookinfo %} -{%- assign endpoint_https_gw_cluster2_bookinfo = fqdn_cluster2_bookinfo %} -{%- assign endpoint_http_gw_cluster2_portal = fqdn_cluster2_portal %} -{%- assign endpoint_https_gw_cluster2_portal = fqdn_cluster2_portal %} -{%- assign endpoint_http_gw_cluster2_grpcbin = fqdn_cluster2_grpcbin %} -{%- assign endpoint_https_gw_cluster2_grpcbin = fqdn_cluster2_grpcbin %} -{%- assign endpoint_http_gw_cluster2_backstage = fqdn_cluster2_backstage %} -{%- assign endpoint_https_gw_cluster2_backstage = fqdn_cluster2_backstage %} -{%- endif %} \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/assert.sh 
b/gloo-mesh/core/byo-redis/2-6/default/scripts/assert.sh deleted file mode 100755 index 75ba95ac90..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/assert.sh +++ /dev/null @@ -1,252 +0,0 @@ -#!/usr/bin/env bash - -##################################################################### -## -## title: Assert Extension -## -## description: -## Assert extension of shell (bash, ...) -## with the common assert functions -## Function list based on: -## http://junit.sourceforge.net/javadoc/org/junit/Assert.html -## Log methods : inspired by -## - https://natelandau.com/bash-scripting-utilities/ -## author: Mark Torok -## -## date: 07. Dec. 2016 -## -## license: MIT -## -##################################################################### - -if command -v tput &>/dev/null && tty -s; then - RED=$(tput setaf 1) - GREEN=$(tput setaf 2) - MAGENTA=$(tput setaf 5) - NORMAL=$(tput sgr0) - BOLD=$(tput bold) -else - RED=$(echo -en "\e[31m") - GREEN=$(echo -en "\e[32m") - MAGENTA=$(echo -en "\e[35m") - NORMAL=$(echo -en "\e[00m") - BOLD=$(echo -en "\e[01m") -fi - -log_header() { - printf "\n${BOLD}${MAGENTA}========== %s ==========${NORMAL}\n" "$@" >&2 -} - -log_success() { - printf "${GREEN}✔ %s${NORMAL}\n" "$@" >&2 -} - -log_failure() { - printf "${RED}✖ %s${NORMAL}\n" "$@" >&2 - file=.test-error.log - echo "$@" >> $file - echo "#############################################" >> $file - echo "#############################################" >> $file -} - - -assert_eq() { - local expected="$1" - local actual="$2" - local msg="${3-}" - - if [ "$expected" == "$actual" ]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$expected == $actual :: $msg" || true - return 1 - fi -} - -assert_not_eq() { - local expected="$1" - local actual="$2" - local msg="${3-}" - - if [ ! "$expected" == "$actual" ]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$expected != $actual :: $msg" || true - return 1 - fi -} - -assert_true() { - local actual="$1" - local msg="${2-}" - - assert_eq true "$actual" "$msg" - return "$?" -} - -assert_false() { - local actual="$1" - local msg="${2-}" - - assert_eq false "$actual" "$msg" - return "$?" -} - -assert_array_eq() { - - declare -a expected=("${!1-}") - # echo "AAE ${expected[@]}" - - declare -a actual=("${!2}") - # echo "AAE ${actual[@]}" - - local msg="${3-}" - - local return_code=0 - if [ ! "${#expected[@]}" == "${#actual[@]}" ]; then - return_code=1 - fi - - local i - for (( i=1; i < ${#expected[@]} + 1; i+=1 )); do - if [ ! "${expected[$i-1]}" == "${actual[$i-1]}" ]; then - return_code=1 - break - fi - done - - if [ "$return_code" == 1 ]; then - [ "${#msg}" -gt 0 ] && log_failure "(${expected[*]}) != (${actual[*]}) :: $msg" || true - fi - - return "$return_code" -} - -assert_array_not_eq() { - - declare -a expected=("${!1-}") - declare -a actual=("${!2}") - - local msg="${3-}" - - local return_code=1 - if [ ! "${#expected[@]}" == "${#actual[@]}" ]; then - return_code=0 - fi - - local i - for (( i=1; i < ${#expected[@]} + 1; i+=1 )); do - if [ ! "${expected[$i-1]}" == "${actual[$i-1]}" ]; then - return_code=0 - break - fi - done - - if [ "$return_code" == 1 ]; then - [ "${#msg}" -gt 0 ] && log_failure "(${expected[*]}) == (${actual[*]}) :: $msg" || true - fi - - return "$return_code" -} - -assert_empty() { - local actual=$1 - local msg="${2-}" - - assert_eq "" "$actual" "$msg" - return "$?" -} - -assert_not_empty() { - local actual=$1 - local msg="${2-}" - - assert_not_eq "" "$actual" "$msg" - return "$?" 
-} - -assert_contain() { - local haystack="$1" - local needle="${2-}" - local msg="${3-}" - - if [ -z "${needle:+x}" ]; then - return 0; - fi - - if [ -z "${haystack##*$needle*}" ]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$haystack doesn't contain $needle :: $msg" || true - return 1 - fi -} - -assert_not_contain() { - local haystack="$1" - local needle="${2-}" - local msg="${3-}" - - if [ -z "${needle:+x}" ]; then - return 0; - fi - - if [ "${haystack##*$needle*}" ]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$haystack contains $needle :: $msg" || true - return 1 - fi -} - -assert_gt() { - local first="$1" - local second="$2" - local msg="${3-}" - - if [[ "$first" -gt "$second" ]]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$first > $second :: $msg" || true - return 1 - fi -} - -assert_ge() { - local first="$1" - local second="$2" - local msg="${3-}" - - if [[ "$first" -ge "$second" ]]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$first >= $second :: $msg" || true - return 1 - fi -} - -assert_lt() { - local first="$1" - local second="$2" - local msg="${3-}" - - if [[ "$first" -lt "$second" ]]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$first < $second :: $msg" || true - return 1 - fi -} - -assert_le() { - local first="$1" - local second="$2" - local msg="${3-}" - - if [[ "$first" -le "$second" ]]; then - return 0 - else - [ "${#msg}" -gt 0 ] && log_failure "$first <= $second :: $msg" || true - return 1 - fi -} \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/check.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/check.sh deleted file mode 100755 index fa52484b28..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/check.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env bash - -printf "Waiting for all the kube-system pods to become ready in context $1" -until [ $(kubectl --context $1 -n kube-system get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.ready}{"\n"}{end}' | grep false -c) -eq 0 ]; do - printf "%s" "." - sleep 1 -done -printf "\n kube-system pods are now ready \n" - -printf "Waiting for all the metallb-system pods to become ready in context $1" -until [ $(kubectl --context $1 -n metallb-system get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.ready}{"\n"}{end}' | grep false -c) -eq 0 ]; do - printf "%s" "." - sleep 1 -done -printf "\n metallb-system pods are now ready \n" - diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/configure-domain-rewrite.sh deleted file mode 100755 index be6dbd6d8b..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/configure-domain-rewrite.sh +++ /dev/null @@ -1,93 +0,0 @@ -#!/usr/bin/env bash - -set -x # Debug mode to show commands -set -e # Stop on error - -hostname="$1" -new_hostname="$2" - -## Install CoreDNS if not installed -if ! 
command -v coredns &> /dev/null; then - wget https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz - tar xvf coredns_1.8.3_linux_amd64.tgz - sudo mv coredns /usr/local/bin/ - sudo rm -rf coredns_1.8.3_linux_amd64.tgz -fi - -name="$(echo {a..z} | tr -d ' ' | fold -w1 | shuf | head -n3 | tr -d '\n')" -tld=$(echo {a..z} | tr -d ' ' | fold -w1 | shuf | head -n2 | tr -d '\n') -random_domain="$name.$tld" -CONFIG_FILE=~/coredns.conf - -## Update coredns.conf with a rewrite rule -if grep -q "rewrite name $hostname" $CONFIG_FILE; then - sed -i "s/rewrite name $hostname.*/rewrite name $hostname $new_hostname/" $CONFIG_FILE -else - if [ ! -f "$CONFIG_FILE" ]; then - # Create a new config file if it doesn't exist - cat << EOF > $CONFIG_FILE -.:5300 { - forward . 8.8.8.8 8.8.4.4 - log -} -EOF - fi - # Append a new rewrite rule - sed -i "/log/i \ rewrite name $hostname $new_hostname" $CONFIG_FILE -fi - -# Ensure the random domain rewrite rule is always present -if grep -q "rewrite name .* httpbin.org" $CONFIG_FILE; then - sed -i "s/rewrite name .* httpbin.org/rewrite name $random_domain httpbin.org/" $CONFIG_FILE -else - sed -i "/log/i \ rewrite name $random_domain httpbin.org" $CONFIG_FILE -fi - -cat $CONFIG_FILE # Display the config for debugging - -## Check if CoreDNS is running and kill it -if pgrep coredns; then - pkill coredns - # wait for the process to be terminated - sleep 10 -fi - -## Restart CoreDNS with the updated config -nohup coredns -conf $CONFIG_FILE &> /dev/null & - -## Configure the system resolver -sudo tee /etc/systemd/resolved.conf > /dev/null < /dev/null || ! command -v jq &> /dev/null; then - echo "Both openssl and jq are required to run this script." - exit 1 -fi - -PRIVATE_KEY_PATH=$1 -SUBJECT=$2 -TEAM=$3 -LLM=$4 -MODEL=$5 - -if [ -z "$PRIVATE_KEY_PATH" ] || [ -z "$SUBJECT" ] || [ -z "$TEAM" ] || [ -z "$LLM" ] || [ -z "$MODEL" ]; then - echo "Usage: $0 <private_key_path> <subject> <team> <llm> <model>" - exit 1 -fi - - -if [[ "$LLM" != "openai" && "$LLM" != "mistralai" ]]; then - echo "LLM must be either 'openai' or 'mistralai'."
- exit 1 -fi - -HEADER='{"alg":"RS256","typ":"JWT"}' -PAYLOAD=$(jq -n --arg sub "$SUBJECT" --arg team "$TEAM" --arg llm "$LLM" --arg model "$MODEL" \ -'{ - "iss": "solo.io", - "org": "solo.io", - "sub": $sub, - "team": $team, - "llms": { - ($llm): [$model] - } -}') - -# Encode Base64URL function -base64url_encode() { - openssl base64 -e | tr -d '=' | tr '/+' '_-' | tr -d '\n' -} - -# Create JWT Header -HEADER_BASE64=$(echo -n $HEADER | base64url_encode) - -# Create JWT Payload -PAYLOAD_BASE64=$(echo -n $PAYLOAD | base64url_encode) - -# Create JWT Signature -SIGNING_INPUT="${HEADER_BASE64}.${PAYLOAD_BASE64}" -SIGNATURE=$(echo -n $SIGNING_INPUT | openssl dgst -sha256 -sign $PRIVATE_KEY_PATH | base64url_encode) - -# Combine all parts to get the final JWT token -JWT_TOKEN="${SIGNING_INPUT}.${SIGNATURE}" - -# Output the JWT token -echo $JWT_TOKEN diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/deploy-aws-with-calico.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/deploy-aws-with-calico.sh deleted file mode 100755 index a58b74e22e..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/deploy-aws-with-calico.sh +++ /dev/null @@ -1,253 +0,0 @@ -#!/usr/bin/env bash -set -o errexit - -number=$1 -name=$2 -region=$3 -zone=$4 -twodigits=$(printf "%02d\n" $number) -kindest_node=${KINDEST_NODE:-kindest\/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31} - -if [ -z "$3" ]; then - region=us-east-1 -fi - -if [ -z "$4" ]; then - zone=us-east-1a -fi - -if hostname -I 2>/dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -mkdir -p /tmp/oidc - -cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA -1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL -395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw -zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm -5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 -2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 -ywIDAQAB ------END PUBLIC KEY----- -EOF - -cat <<'EOF' >/tmp/oidc/sa-signer.key ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ -+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui -PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 -+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ 
-odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 -f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG -el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY -FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh -SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc -r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv -z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn -7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy -3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 -PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy -72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw -BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo -hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn -WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ -y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI -KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 -0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR -f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN -b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc -Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd -qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== ------END RSA PRIVATE KEY----- -EOF - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - extraMounts: - - containerPath: /etc/kubernetes/oidc - hostPath: /tmp/oidc - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -kubeadmConfigPatches: -- | - kind: ClusterConfiguration - apiServer: - extraArgs: - service-account-key-file: /etc/kubernetes/pki/sa.pub - service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub - service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key - service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com - api-audiences: sts.amazonaws.com - extraVolumes: - - name: oidc - hostPath: /etc/kubernetes/oidc - mountPath: /etc/kubernetes/oidc - readOnly: true - metadata: - name: config -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -mkdir -p /tmp/oidc - -cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA -1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL -395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw -zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm -5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 -2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 -ywIDAQAB ------END PUBLIC KEY----- -EOF - -cat <<'EOF' >/tmp/oidc/sa-signer.key ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ -+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui -PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 -+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ -odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 -f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG -el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY -FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh -SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc -r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv -z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn -7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy -3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 -PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy -72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw -BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo -hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn -WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ -y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI -KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 -0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR -f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN -b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc -Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd -qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== ------END RSA PRIVATE KEY----- -EOF - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - extraMounts: - - containerPath: /etc/kubernetes/oidc - hostPath: /tmp/oidc - labels: - ingress-ready: true - 
topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -kubeadmConfigPatches: -- | - kind: ClusterConfiguration - apiServer: - extraArgs: - service-account-key-file: /etc/kubernetes/pki/sa.pub - service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub - service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key - service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com - api-audiences: sts.amazonaws.com - extraVolumes: - - name: oidc - hostPath: /etc/kubernetes/oidc - mountPath: /etc/kubernetes/oidc - readOnly: true - metadata: - name: config -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -helm repo add cilium https://helm.cilium.io/ - -helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ - --namespace kube-system \ - --set prometheus.enabled=true \ - --set operator.prometheus.enabled=true \ - --set hubble.enabled=true \ - --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ - --set hubble.relay.enabled=true \ - --set hubble.ui.enabled=true \ - --set kubeProxyReplacement=partial \ - --set hostServices.enabled=false \ - --set hostServices.protocols="tcp" \ - --set externalIPs.enabled=true \ - --set nodePort.enabled=true \ - --set hostPort.enabled=true \ - --set bpf.masquerade=false \ - --set image.pullPolicy=IfNotPresent \ - --set cni.exclusive=false \ - --set ipam.mode=kubernetes -kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl 
--context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -mkdir -p /tmp/oidc - -cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA -1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL -395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw -zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm -5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 -2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 -ywIDAQAB ------END PUBLIC KEY----- -EOF - -cat <<'EOF' >/tmp/oidc/sa-signer.key ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ -+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui -PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 -+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ -odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 
-f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG -el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY -FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh -SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc -r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv -z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn -7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy -3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 -PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy -72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw -BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo -hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn -WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ -y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI -KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 -0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR -f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN -b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc -Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd -qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== ------END RSA PRIVATE KEY----- -EOF - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - extraMounts: - - containerPath: /etc/kubernetes/oidc - hostPath: /tmp/oidc - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -kubeadmConfigPatches: -- | - kind: ClusterConfiguration - apiServer: - extraArgs: - service-account-key-file: /etc/kubernetes/pki/sa.pub - service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub - service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key - service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com - api-audiences: sts.amazonaws.com - extraVolumes: - - name: oidc - hostPath: /etc/kubernetes/oidc - mountPath: /etc/kubernetes/oidc - readOnly: true - metadata: - name: config -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat <&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - ipFamily: ipv6 -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = 
["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].GlobalIPv6Address') -networkkind=$(echo ${ipkind} | rev | cut -d: -f2- | rev): - -#kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}${number}1-${networkkind}${number}9 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -# Preload images -cat << EOF >> images.txt -quay.io/cilium/cilium:v1.15.5 -quay.io/cilium/operator-generic:v1.15.5 -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done - -helm repo add cilium https://helm.cilium.io/ - -helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ - --namespace kube-system \ - --set prometheus.enabled=true \ - --set operator.prometheus.enabled=true \ - --set hubble.enabled=true \ - --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ - --set hubble.relay.enabled=true \ - --set hubble.ui.enabled=true \ - --set kubeProxyReplacement=partial \ - --set hostServices.enabled=false \ - --set hostServices.protocols="tcp" \ - --set externalIPs.enabled=true \ - --set nodePort.enabled=true \ - --set hostPort.enabled=true \ - --set bpf.masquerade=false \ - --set image.pullPolicy=IfNotPresent \ - --set cni.exclusive=false \ - --set ipam.mode=kubernetes -kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -- role: worker - image: ${kindest_node} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -helm repo add cilium https://helm.cilium.io/ - -helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ - --namespace kube-system \ - --set prometheus.enabled=true \ - --set operator.prometheus.enabled=true \ - --set hubble.enabled=true \ - --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ - --set hubble.relay.enabled=true \ - --set hubble.ui.enabled=true \ - --set kubeProxyReplacement=partial \ - --set hostServices.enabled=false \ - --set hostServices.protocols="tcp" \ - --set externalIPs.enabled=true \ - --set nodePort.enabled=true \ - --set hostPort.enabled=true \ - --set bpf.masquerade=false \ - --set image.pullPolicy=IfNotPresent \ - --set cni.exclusive=false \ - --set ipam.mode=kubernetes -kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - disableDefaultCNI: true - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null; then - myip=$(hostname -I | awk '{ print $1 }') -else - myip=$(ipconfig getifaddr en0) -fi - -# Function to determine the next available cluster number -get_next_cluster_number() { - if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then - echo 1 - else - highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) - echo $((highest_num + 1)) - fi -} - -if [ -f /.dockerenv ]; then -myip=$HOST_IP -container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) -docker network connect "kind" $container || true -number=$(get_next_cluster_number) -twodigits=$(printf "%02d\n" $number) -fi - -reg_name='kind-registry' -reg_port='5000' -docker start "${reg_name}" 2>/dev/null || \ -docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 - -cache_port='5000' -cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ -docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 -done - -cat << EOF > kind${number}.yaml -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -nodes: -- role: control-plane - image: ${kindest_node} - extraPortMappings: - - containerPort: 6443 - hostPort: 70${twodigits} - labels: - ingress-ready: true - topology.kubernetes.io/region: ${region} - topology.kubernetes.io/zone: ${zone} -networking: - serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" - podSubnet: "10.1${twodigits}.0.0/16" -containerdConfigPatches: -- |- - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] - endpoint = ["http://${reg_name}:${reg_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] - endpoint = ["http://docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] - endpoint = ["http://us-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] - endpoint = ["http://us-central1-docker:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] - endpoint = ["http://quay:${cache_port}"] - [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] - endpoint = ["http://gcr:${cache_port}"] -EOF - -kind create cluster --name kind${number} --config kind${number}.yaml - -ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') -networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') - -kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true - -docker network connect "kind" "${reg_name}" || true -docker network connect "kind" docker || true -docker network connect "kind" us-docker || true -docker network connect "kind" us-central1-docker || true -docker network connect "kind" quay || true -docker network connect "kind" gcr || true - -# Preload images -cat << EOF >> images.txt -quay.io/metallb/controller:v0.13.12 -quay.io/metallb/speaker:v0.13.12 -EOF -cat images.txt | while read image; do - docker pull $image || true - kind load docker-image $image --name kind${number} || true -done -for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done -kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" -kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true - -cat << EOF > metallb${number}.yaml -apiVersion: metallb.io/v1beta1 -kind: IPAddressPool -metadata: - name: first-pool - namespace: metallb-system -spec: - addresses: - - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: empty - namespace: metallb-system -EOF - -printf "Create IPAddressPool in kind-kind${number}\n" -for i in {1..10}; do -kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break -sleep 2 -done - -# connect the registry to the cluster network if not already connected -printf "Renaming context kind-kind${number} to ${name}\n" -for i in {1..100}; do - (kubectl config get-contexts -oname | grep ${name}) && break - kubectl config rename-context kind-kind${number} ${name} && break - printf " $i"/100 - sleep 2 - [ $i -lt 100 ] || exit 1 -done - -# Document the local registry -# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry -cat </dev/null -./istio-*/bin/istioctl --context cluster1 pc all -n istio-gateways deploy/istio-ingressgateway -o json > /tmp/current-output -json-diff /tmp/previous-output /tmp/current-output diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/kubectl.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/kubectl.sh deleted file mode 100755 index 8982250d27..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/kubectl.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash - -echo "kubectl apply -f-</dev/null || true" -sed -n '/```bash/,/```/p; //p' | egrep -v '```|' | sed '/#IGNORE_ME/d' diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/register-domain.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/register-domain.sh deleted file mode 100755 index f9084487e8..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/register-domain.sh +++ /dev/null @@ -1,51 +0,0 @@ -#!/usr/bin/env bash - -# Check if the correct number of arguments is provided -if [ "$#" -ne 2 ]; then - echo "Usage: $0 " - exit 1 -fi - -# Variables -hostname="$1" -new_ip_or_domain="$2" -hosts_file="/etc/hosts" - -# Function to check if the input is a valid IP address -is_ip() { - if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true - else - return 1 # 1 = false - fi -} - -# Function to resolve domain to the first IPv4 address using dig -resolve_domain() { - # 
Using dig to query A records, and awk to parse the first IPv4 address - dig +short A "$1" | awk '/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ {print; exit}' -} - -# Validate new_ip_or_domain or resolve domain to IP -if is_ip "$new_ip_or_domain"; then - new_ip="$new_ip_or_domain" -else - new_ip=$(resolve_domain "$new_ip_or_domain") - if [ -z "$new_ip" ]; then - echo "Failed to resolve domain to an IPv4 address." - exit 1 - fi -fi - -# Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then - # Update the existing entry with the new IP - tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" - sudo cp "$tempfile" "$hosts_file" - echo "Updated $hostname in $hosts_file with new IP: $new_ip" -else - # Add a new entry if it doesn't exist - echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null - echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/snapdiff.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/snapdiff.sh deleted file mode 100755 index 51786826eb..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/snapdiff.sh +++ /dev/null @@ -1,6 +0,0 @@ -mv /tmp/current-output /tmp/previous-output 2>/dev/null -pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}') -kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- curl -s http://localhost:9091/snapshots/output | jq '.translator | . as $root | ($root | keys[]) as $namespace | ($root[$namespace] | keys[]) as $parent | if $root[$namespace][$parent].Outputs then (($root[$namespace][$parent].Outputs | keys[]) as $object | ($object | split(",")) as $arr | {apiVersion: $arr[0], kind: ($arr[1] |split("=")[1])} + $root[$namespace][$parent].Outputs[$object][]) else empty end' | jq --slurp > /tmp/current-output -array1=$(cat /tmp/previous-output | jq -e '') -array2=$(cat /tmp/current-output | jq -e '') -jq -n --argjson array1 "$array1" --argjson array2 "$array2" '{"array1": $array1,"array2":$array2} | .array2-.array1' | docker run -i --rm mikefarah/yq -P '.' 
\ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/scripts/timestamped_output.sh b/gloo-mesh/core/byo-redis/2-6/default/scripts/timestamped_output.sh deleted file mode 100755 index b1f741613e..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/scripts/timestamped_output.sh +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/bash - -# Read input line by line and prepend a timestamp -while IFS= read -r line; do - echo "$(date '+%Y-%m-%d %H:%M:%S') $line" -done diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/can-resolve.test.js.liquid b/gloo-mesh/core/byo-redis/2-6/default/tests/can-resolve.test.js.liquid deleted file mode 100644 index 7d1163da97..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/can-resolve.test.js.liquid +++ /dev/null @@ -1,17 +0,0 @@ -const dns = require('dns'); -const chaiHttp = require("chai-http"); -const chai = require("chai"); -const expect = chai.expect; -chai.use(chaiHttp); -const { waitOnFailedTest } = require('./tests/utils'); - -afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); - -describe("Address '" + process.env.{{ to_resolve }} + "' can be resolved in DNS", () => { - it(process.env.{{ to_resolve }} + ' can be resolved', (done) => { - return dns.lookup(process.env.{{ to_resolve }}, (err, address, family) => { - expect(address).to.be.an.ip; - done(); - }); - }); -}); \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/chai-exec.js b/gloo-mesh/core/byo-redis/2-6/default/tests/chai-exec.js deleted file mode 100644 index 2164f5b247..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/chai-exec.js +++ /dev/null @@ -1,126 +0,0 @@ -const jsYaml = require('js-yaml'); -const deepObjectDiff = require('deep-object-diff'); -const chaiExec = require("@jsdevtools/chai-exec"); -const chai = require("chai"); -const expect = chai.expect; -const should = chai.should(); -chai.use(chaiExec); -const utils = require('./utils'); -chai.config.truncateThreshold = 4000; // length threshold for actual and expected values in assertion errors - -global = { - checkKubernetesObject: async ({ context, namespace, kind, k8sObj, yaml }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + kind + " " + k8sObj + " -o json"; - let cli = chaiExec(command); - let json = jsYaml.load(yaml) - - cli.should.exit.with.code(0); - cli.stderr.should.be.empty; - let data = JSON.parse(cli.stdout); - let diff = deepObjectDiff.detailedDiff(json, data); - let expectedObject = false; - console.log(Object.keys(diff.deleted).length); - if (Object.keys(diff.updated).length === 0 && Object.keys(diff.deleted).length === 0) { - expectedObject = true; - } - expect(expectedObject, "The following object can't be found or is not as expected:\n" + yaml).to.be.true; - }, - checkDeployment: async ({ context, namespace, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get deploy " + k8sObj + " -o jsonpath='{.status}'"; - let cli = chaiExec(command); - cli.stderr.should.be.empty; - let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).readyReplicas || 0; - let replicas = JSON.parse(cli.stdout.slice(1, -1)).replicas; - if (readyReplicas != replicas) { - console.log(" ----> " + k8sObj + " in " + context + " not ready..."); - await utils.sleep(1000); - } - cli.should.exit.with.code(0); - readyReplicas.should.equal(replicas); - }, - checkDeploymentHasPod: async ({ context, namespace, deployment }) => { - let command = "kubectl --context " + context + " 
-n " + namespace + " get deploy " + deployment + " -o name'"; - let cli = chaiExec(command); - cli.stderr.should.be.empty; - cli.stdout.should.not.be.empty; - cli.stdout.should.contain(deployment); - }, - checkDeploymentsWithLabels: async ({ context, namespace, labels, instances }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get deploy -l " + labels + " -o jsonpath='{.items}'"; - let cli = chaiExec(command); - cli.stderr.should.be.empty; - let deployments = JSON.parse(cli.stdout.slice(1, -1)); - expect(deployments).to.have.lengthOf(instances); - deployments.forEach((deployment) => { - let readyReplicas = deployment.status.readyReplicas || 0; - let replicas = deployment.status.replicas; - if (readyReplicas != replicas) { - console.log(" ----> " + deployment.metadata.name + " in " + context + " not ready..."); - utils.sleep(1000); - } - cli.should.exit.with.code(0); - readyReplicas.should.equal(replicas); - }); - }, - checkStatefulSet: async ({ context, namespace, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get sts " + k8sObj + " -o jsonpath='{.status}'"; - let cli = chaiExec(command); - cli.stderr.should.be.empty; - let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).readyReplicas || 0; - let replicas = JSON.parse(cli.stdout.slice(1, -1)).replicas; - if (readyReplicas != replicas) { - console.log(" ----> " + k8sObj + " in " + context + " not ready..."); - await utils.sleep(1000); - } - cli.should.exit.with.code(0); - readyReplicas.should.equal(replicas); - }, - checkDaemonSet: async ({ context, namespace, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get ds " + k8sObj + " -o jsonpath='{.status}'"; - let cli = chaiExec(command); - cli.stderr.should.be.empty; - let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).numberReady || 0; - let replicas = JSON.parse(cli.stdout.slice(1, -1)).desiredNumberScheduled; - if (readyReplicas != replicas) { - console.log(" ----> " + k8sObj + " in " + context + " not ready..."); - await utils.sleep(1000); - } - cli.should.exit.with.code(0); - readyReplicas.should.equal(replicas); - }, - k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; - let cli = chaiExec(command); - - cli.stderr.should.be.empty; - cli.should.exit.with.code(0); - }, - genericCommand: async ({ command, responseContains = "" }) => { - let cli = chaiExec(command); - if (cli.stderr && cli.stderr != "") { - console.log(" ----> " + command + " not succesful: " + cli.stderr); - await utils.sleep(1000); - } - cli.stderr.should.be.empty; - cli.should.exit.with.code(0); - if (responseContains != "") { - cli.stdout.should.contain(responseContains); - } - }, - getOutputForCommand: ({ command }) => { - let cli = chaiExec(command); - return cli.stdout; - }, - curlInPod: ({ curlCommand, podName, namespace }) => { - return global.getOutputForCommand({ command: `kubectl -n ${namespace} debug -i -q ${podName} --image=curlimages/curl -- sh -c \'${curlCommand}\'` }).replaceAll("'", ""); - }, -}; - -module.exports = global; - -afterEach(function (done) { - if (this.currentTest.currentRetry() > 0 && this.currentTest.currentRetry() % 5 === 0) { - console.log(`Test "${this.currentTest.fullTitle()}" retry: ${this.currentTest.currentRetry()}`); - } - utils.waitOnFailedTest(done, this.currentTest.currentRetry()) -}); diff --git 
a/gloo-mesh/core/byo-redis/2-6/default/tests/chai-http.js b/gloo-mesh/core/byo-redis/2-6/default/tests/chai-http.js deleted file mode 100644 index 9d989260e9..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/chai-http.js +++ /dev/null @@ -1,101 +0,0 @@ -const chaiHttp = require("chai-http"); -const chai = require("chai"); -const expect = chai.expect; -chai.use(chaiHttp); -const utils = require('./utils'); -const fs = require("fs"); - -process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; -process.env.NODE_NO_WARNINGS = 1; -chai.config.truncateThreshold = 4000; // length threshold for actual and expected values in assertion errors - -global = { - checkURL: ({ host, path = "", headers = [], certFile = '', keyFile = '', retCode }) => { - let cert = certFile ? fs.readFileSync(certFile) : ''; - let key = keyFile ? fs.readFileSync(keyFile) : ''; - let request = chai.request(host).head(path).redirects(0).cert(cert).key(key); - headers.forEach(header => request.set(header.key, header.value)); - return request - .send() - .then(async function (res) { - expect(res).to.have.status(retCode); - }); - }, - checkBody: ({ host, path = "", headers = [], body = '', certFile = '', keyFile = '', method = "get", data = "", match = true }) => { - let cert = certFile ? fs.readFileSync(certFile) : ''; - let key = keyFile ? fs.readFileSync(keyFile) : ''; - let request = chai.request(host); - if (method === "get") { - request = request.get(path).redirects(0).cert(cert).key(key); - } else if (method === "post") { - request = request.post(path).redirects(0); - } else if (method === "put") { - request = request.put(path).redirects(0); - } else if (method === "head") { - request = request.head(path).redirects(0); - } else { - throw 'The requested method is not implemented.' - } - headers.forEach(header => request.set(header.key, header.value)); - return request - .send(data) - .then(async function (res) { - if (match) { - expect(res.text).to.contain(body); - } else { - expect(res.text).not.to.contain(body); - } - }); - }, - checkHeaders: ({ host, path = "", headers = [], certFile = '', keyFile = '', expectedHeaders = [] }) => { - let cert = certFile ? fs.readFileSync(certFile) : ''; - let key = keyFile ? fs.readFileSync(keyFile) : ''; - let request = chai.request(host).get(path).redirects(0).cert(cert).key(key); - headers.forEach(header => request.set(header.key, header.value)); - return request - .send() - .then(async function (res) { - expectedHeaders.forEach(header => { - if (header.value === '*') { - expect(res.header).to.have.property(header.key); - } else { - expect(res.header[header.key]).to.equal(header.value); - } - }); - }); - }, - checkWithMethod: ({ host, path, headers = [], method = "get", certFile = '', keyFile = '', retCode }) => { - let cert = certFile ? fs.readFileSync(certFile) : ''; - let key = keyFile ? fs.readFileSync(keyFile) : ''; - var request = chai.request(host); - switch (method) { - case 'get': - request = request.get(path); - break; - case 'post': - request = request.post(path); - break; - case 'put': - request = request.put(path); - break; - default: - throw 'The requested method is not implemented.' 
- } - request.cert(cert).key(key).redirects(0); - headers.forEach(header => request.set(header.key, header.value)); - return request - .send() - .then(async function (res) { - expect(res).to.have.status(retCode); - }); - } -}; - -module.exports = global; - -afterEach(function (done) { - if (this.currentTest.currentRetry() > 0 && this.currentTest.currentRetry() % 5 === 0) { - console.log(`Test "${this.currentTest.fullTitle()}" retry: ${this.currentTest.currentRetry()}`); - } - utils.waitOnFailedTest(done, this.currentTest.currentRetry()) -}); \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.js b/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.js deleted file mode 100644 index a3bd686d9c..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.js +++ /dev/null @@ -1,248 +0,0 @@ -// k8s-cr-watcher.js - -const k8s = require('@kubernetes/client-node'); -const yaml = require('js-yaml'); -const diff = require('deep-diff').diff; - -function delay(ms) { - return new Promise(resolve => setTimeout(resolve, ms)); -} - -function sanitizeObject(obj) { - const sanitized = JSON.parse(JSON.stringify(obj)); - if (sanitized.metadata) { - delete sanitized.metadata.managedFields; - delete sanitized.metadata.generation; - delete sanitized.metadata.resourceVersion; - delete sanitized.metadata.creationTimestamp; - } - return sanitized; -} - -function getValueAtPath(obj, pathArray) { - return pathArray.reduce((acc, key) => (acc && acc[key] !== undefined) ? acc[key] : undefined, obj); -} - -// Helper function to format differences into a human-readable string -function formatDifferences(differences, previousObj, currentObj) { - let output = ''; - const handledArrayPaths = new Set(); - - differences.forEach(d => { - const path = d.path.join('.'); - if (d.kind === 'A') { - const arrayPath = d.path.join('.'); - if (!handledArrayPaths.has(arrayPath)) { - const beforeArray = getValueAtPath(previousObj, d.path); - const afterArray = getValueAtPath(currentObj, d.path); - - output += `• ${arrayPath}:\n\nBefore:\n${yaml.dump(beforeArray).trim().split('\n').join('\n')}\nAfter:\n${yaml.dump(afterArray).trim().split('\n').join('\n')}\n`; - handledArrayPaths.add(arrayPath); - } - } else { - // Check if this change is part of an already handled array - const isPartOfHandledArray = Array.from(handledArrayPaths).some(arrayPath => path.startsWith(arrayPath)); - - if (!isPartOfHandledArray) { - switch (d.kind) { - case 'E': // Edit - output += `• ${path}: '${JSON.stringify(d.lhs)}' => '${JSON.stringify(d.rhs)}'\n`; - break; - case 'N': // New - output += `• ${path}: Added '${JSON.stringify(d.rhs)}'\n`; - break; - case 'D': // Deleted - output += `• ${path}: Removed '${JSON.stringify(d.lhs)}'\n`; - break; - default: - output += `• ${path}: Changed\n`; - } - } - } - }); - - return output; -} - -// Function to extract change information from an event -function extractChangeInfo(type, apiObj, previousObj, currentObj) { - const name = apiObj.metadata.name; - const namespace = apiObj.metadata.namespace; - const kind = apiObj.kind; - const apiVersion = apiObj.apiVersion; - - let changeInfo = `${type}: ${kind} "${name}"`; - if (namespace) { - changeInfo += ` in namespace "${namespace}"`; - } - changeInfo += ` (apiVersion: ${apiVersion})`; - - if (type === 'MODIFIED' && previousObj) { - const differences = diff(previousObj, apiObj); - if (differences && differences.length > 0) { - // Filter out non-essential diffs - const essentialDifferences = differences.filter(d => { - 
const path = d.path.join('.'); - return !path.startsWith('metadata.generation') && - !path.startsWith('metadata.resourceVersion') && - !path.startsWith('metadata.creationTimestamp'); - }); - - if (essentialDifferences.length > 0) { - changeInfo += '\n\nDifferences:\n' + formatDifferences(essentialDifferences, previousObj, apiObj); - } else { - changeInfo += '\n\nNo meaningful differences detected'; - } - } else { - changeInfo += '\n\nNo differences detected'; - } - } - - return changeInfo; -} - -async function watchCRs(contextName, delaySeconds, durationSeconds) { - let changeCount = 0; - let isWatchSetupComplete = false; - - console.log(`Waiting for ${delaySeconds} seconds before starting the test...`); - await delay(delaySeconds * 1000); - console.log('Delay complete. Starting the test.'); - - const kc = new k8s.KubeConfig(); - kc.loadFromDefault(); - - const contexts = kc.getContexts(); - const context = contexts.find(c => c.name === contextName); - - kc.setCurrentContext(contextName); - - const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi); - const apisApi = kc.makeApiClient(k8s.ApisApi); - - async function getResources(group, version) { - try { - const { body } = await k8sApi.listClusterCustomObject(group, version, ''); - return body.resources || []; - } catch (error) { - console.error(`Error getting resources for ${group}/${version}: ${error}`); - return []; - } - } - - // Function to watch a specific CR - async function watchCR(group, version, plural, abortController) { - const watch = new k8s.Watch(kc); - let resourceVersion; - - try { - // Get the latest resourceVersion - const listResponse = await k8sApi.listClusterCustomObject(group, version, plural); - resourceVersion = listResponse.body.metadata.resourceVersion; - - // Cache of previous objects (sanitized) - const objectCache = {}; - - // Initialize the object cache - if (listResponse.body.items) { - listResponse.body.items.forEach(item => { - objectCache[item.metadata.uid] = sanitizeObject(item); - }); - } - - await watch.watch( - `/apis/${group}/${version}/${plural}`, - { - abortSignal: abortController.signal, - allowWatchBookmarks: true, - resourceVersion: resourceVersion - }, - (type, apiObj) => { - if (isWatchSetupComplete) { - const uid = apiObj.metadata.uid; - - // Sanitize the current object by removing non-essential metadata - const sanitizedObj = sanitizeObject(apiObj); - - let previousObj = objectCache[uid]; - - if (previousObj) { - // Clone previousObj to avoid mutation - previousObj = JSON.parse(JSON.stringify(previousObj)); - } - - if (type === 'ADDED' || type === 'MODIFIED' || type === 'DELETED') { - const changeInfo = extractChangeInfo(type, sanitizedObj, previousObj, sanitizedObj); - - // Only log meaningful changes - if (type === 'MODIFIED' && changeInfo.includes('No meaningful differences detected')) { - // Skip logging if there are no meaningful changes - return; - } - - console.log(changeInfo); - console.log('---'); - console.log(yaml.dump(sanitizedObj).trim()); // Display the full object in YAML - console.log('---'); - - if (type === 'DELETED') { - delete objectCache[uid]; - } else { - objectCache[uid] = sanitizedObj; - } - - changeCount++; - } - } - }, - (err) => { - if (err && err.message !== 'aborted') { - console.error(`Error watching ${group}/${version}/${plural}: ${err}`); - } - } - ); - } catch (error) { - if (error.message !== 'aborted') { - console.error(`Error setting up watch for ${group}/${version}/${plural}: ${error}`); - } - } - } - - console.log(`Using context: ${contextName}`); - 
console.log(`Watching for CR changes with apiVersion containing "istio" or "gloo" for ${durationSeconds} seconds...`); - - const abortController = new AbortController(); - const watchPromises = []; - - const { body: apiGroups } = await apisApi.getAPIVersions(); - - for (const group of apiGroups.groups) { - if (group.name.includes('istio') || group.name.includes('gloo')) { - const latestVersion = group.preferredVersion || group.versions[0]; - const resources = await getResources(group.name, latestVersion.version); - - for (const resource of resources) { - if (resource.kind && resource.name && !resource.name.includes('/')) { - watchPromises.push(watchCR(group.name, latestVersion.version, resource.name, abortController)); - } - } - } - } - - console.log("Watch setup complete. Listening for changes..."); - console.log('---'); - - isWatchSetupComplete = true; - - await new Promise(resolve => setTimeout(resolve, durationSeconds * 1000)); - - abortController.abort(); - console.log(`Watch completed after ${durationSeconds} seconds.`); - console.log(`Total changes detected: ${changeCount}`); - - await Promise.allSettled(watchPromises); - - return changeCount; -} - -module.exports = { watchCRs }; \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.test.js.liquid b/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.test.js.liquid deleted file mode 100644 index 85ff59def2..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/k8s-changes.test.js.liquid +++ /dev/null @@ -1,25 +0,0 @@ -const assert = require('assert'); -const { watchCRs } = require('./tests/k8s-changes'); - -describe('Kubernetes CR Watcher', function() { - let contextName = process.env.{{ context | default: "CLUSTER1" }}; - let delaySeconds = {{ delay | default: 5 }}; - let durationSeconds = {{ duration | default: 10 }}; - let changeCount = 0; - - it(`No CR changed in context ${contextName} for ${durationSeconds} seconds`, async function() { - this.timeout((durationSeconds + delaySeconds + 10) * 1000); - - changeCount = await watchCRs(contextName, delaySeconds, durationSeconds); - - assert.strictEqual(changeCount, 0, `Test failed: ${changeCount} changes were detected`); - }); - - after(function(done) { - setTimeout(() => { - process.exit(changeCount); - }, 1000); - - done(); - }); -}); \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak-token.js b/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak-token.js deleted file mode 100644 index 3ac1a691db..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak-token.js +++ /dev/null @@ -1,4 +0,0 @@ -const keycloak = require('./keycloak'); -const { argv } = require('node:process'); - -keycloak.getKeyCloakCookie(argv[2], argv[3]); diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak.js b/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak.js deleted file mode 100644 index 08019536b0..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/keycloak.js +++ /dev/null @@ -1,47 +0,0 @@ -const puppeteer = require('puppeteer'); -//const utils = require('./utils'); - -global = { - getKeyCloakCookie: async (url, user) => { - const browser = await puppeteer.launch({ - headless: "new", - ignoreHTTPSErrors: true, - args: ['--no-sandbox', '--disable-setuid-sandbox'], // needed for instruqt - }); - // Create a new browser context - const context = await browser.createBrowserContext(); - const page = await context.newPage(); - await page.goto(url); - await page.waitForNetworkIdle({ 
options: { timeout: 1000 } }); - //await utils.sleep(1000); - - // Enter credentials - await page.screenshot({path: 'screenshot.png'}); - await page.waitForSelector('#username', { options: { timeout: 1000 } }); - await page.waitForSelector('#password', { options: { timeout: 1000 } }); - await page.type('#username', user); - await page.type('#password', 'password'); - await page.click('#kc-login'); - await page.waitForNetworkIdle({ options: { timeout: 1000 } }); - //await utils.sleep(1000); - - // Retrieve session cookie - const cookies = await page.cookies(); - const sessionCookie = cookies.find(cookie => cookie.name === 'keycloak-session'); - let ret; - if (sessionCookie) { - ret = `${sessionCookie.name}=${sessionCookie.value}`; // Construct the cookie string - } else { - // console.error(await page.content()); // very verbose - await page.screenshot({path: 'screenshot.png'}); - console.error(` No session cookie found for ${user}`); - ret = "keycloak-session=dummy"; - } - await context.close(); - await browser.close(); - console.log(ret); - return ret; - } -}; - -module.exports = global; diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/base.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/base.js deleted file mode 100644 index 426633ecc4..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/base.js +++ /dev/null @@ -1,15 +0,0 @@ -const { logDebug } = require('../utils/logging'); - -class BasePage { - constructor(page) { - this.page = page; - } - - async navigateTo(url) { - logDebug(`Navigating to ${url}`); - await this.page.goto(url, { waitUntil: 'networkidle2' }); - logDebug('Navigation complete'); - } -} - -module.exports = BasePage; \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/constants.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/constants.js deleted file mode 100644 index 17068fbf55..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/constants.js +++ /dev/null @@ -1,13 +0,0 @@ -const InsightType = { - BP: 'BP', - CFG: 'CFG', - HLT: 'HLT', - ING: 'ING', - RES: 'RES', - RTE: 'RTE', - SEC: 'SEC', -}; - -module.exports = { - InsightType, -}; diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-api-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-api-page.js deleted file mode 100644 index 87a55d3589..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-api-page.js +++ /dev/null @@ -1,29 +0,0 @@ -class DeveloperPortalAPIPage { - constructor(page) { - this.page = page; - - // Selectors - this.apiBlocksSelector = 'a[href^="/apis/"]'; - } - - async navigateTo(url) { - await this.page.goto(url, { waitUntil: 'networkidle2' }); - } - - async getAPIProducts() { - await this.page.waitForSelector(this.apiBlocksSelector, { visible: true }); - - const apiBlocks = await this.page.evaluate((selector) => { - const blocks = document.querySelectorAll(selector); - - return Array.from(blocks).map(block => { - const blockHTML = block.outerHTML; - return blockHTML; - }); - }, this.apiBlocksSelector); - - return apiBlocks; - } -} - -module.exports = DeveloperPortalAPIPage; diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-home-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-home-page.js deleted file mode 100644 index 343be35345..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/developer-portal-home-page.js +++ /dev/null @@ -1,32 +0,0 @@ -class 
DeveloperPortalHomePage { - constructor(page) { - this.page = page; - - // Selectors - this.loginLink = 'a[href="/v1/login"]'; - this.userHolder = '[class="userHolder"]'; - } - - async navigateTo(url) { - await this.page.goto(url, { waitUntil: 'networkidle2' }); - } - - async clickLogin() { - await this.page.waitForSelector(this.loginLink, { visible: true }); - await this.page.click(this.loginLink); - } - - async getLoggedInUserName() { - await this.page.waitForSelector(this.userHolder, { visible: true }); - - const username = await this.page.evaluate(() => { - const userHolderDiv = document.querySelector('.userHolder'); - const text = userHolderDiv ? userHolderDiv.textContent.trim() : ''; - return text.replace(/]*>([\s\S]*?)<\/svg>/g, '').trim(); - }); - - return username; - } -} - -module.exports = DeveloperPortalHomePage; diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/graph-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/graph-page.js deleted file mode 100644 index 25527dc275..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/graph-page.js +++ /dev/null @@ -1,90 +0,0 @@ -const BasePage = require("../base"); - -class GraphPage extends BasePage { - constructor(page) { - super(page) - - // Selectors - this.clusterDropdownButton = '[data-testid="cluster-dropdown"] button'; - this.selectCheckbox = (value) => `input[type="checkbox"][value="${value}"]`; - this.namespaceDropdownButton = '[data-testid="namespace-dropdown"] button'; - this.fullscreenButton = '[data-testid="graph-fullscreen-button"]'; - this.centerButton = '[data-testid="graph-center-button"]'; - this.canvasSelector = '[data-testid="graph-screenshot-container"]'; - this.layoutSettingsButton = '[data-testid="graph-layout-settings-button"]'; - this.ciliumNodesButton = '[data-testid="graph-cilium-toggle"]'; - this.disableCiliumNodesButton = '[data-testid="graph-cilium-toggle"][aria-checked="true"]'; - this.enableCiliumNodesButton = '[data-testid="graph-cilium-toggle"][aria-checked="false"]'; - - } - - async selectClusters(clusters) { - await this.page.waitForSelector(this.clusterDropdownButton, { visible: true }); - await this.page.click(this.clusterDropdownButton); - for (const cluster of clusters) { - await this.page.waitForSelector(this.selectCheckbox(cluster), { visible: true }); - await this.page.click(this.selectCheckbox(cluster)); - await new Promise(resolve => setTimeout(resolve, 50)); - } - } - - async selectNamespaces(namespaces) { - await this.page.click(this.namespaceDropdownButton); - for (const namespace of namespaces) { - await this.page.waitForSelector(this.selectCheckbox(namespace), { visible: true }); - await this.page.click(this.selectCheckbox(namespace)); - await new Promise(resolve => setTimeout(resolve, 50)); - } - } - - async toggleLayoutSettings() { - await this.page.waitForSelector(this.layoutSettingsButton, { visible: true, timeout: 5000 }); - await this.page.click(this.layoutSettingsButton); - // Toggle Layout settings takes a while to open, subsequent actions will fail if we don't wait - await new Promise(resolve => setTimeout(resolve, 1000)); - } - - async enableCiliumNodes() { - const ciliumNodesButtonExists = await this.page.$(this.ciliumNodesButton) !== null; - if (ciliumNodesButtonExists) { - await this.page.waitForSelector(this.enableCiliumNodesButton, { visible: true, timeout: 5000 }); - await this.page.click(this.enableCiliumNodesButton); - } - } - - async disableCiliumNodes() { - const ciliumNodesButtonExists = await 
this.page.$(this.ciliumNodesButton) !== null; - if (ciliumNodesButtonExists) { - await this.page.waitForSelector(this.disableCiliumNodesButton, { visible: true, timeout: 5000 }); - await this.page.click(this.disableCiliumNodesButton); - } - } - - async fullscreenGraph() { - await this.page.click(this.fullscreenButton); - await new Promise(resolve => setTimeout(resolve, 150)); - } - - async centerGraph() { - await this.page.click(this.centerButton); - await new Promise(resolve => setTimeout(resolve, 150)); - } - - async waitForLoadingContainerToDisappear(timeout = 50000) { - await this.page.waitForFunction( - () => !document.querySelector('[data-testid="loading-container"]'), - { timeout } - ); - } - - async captureCanvasScreenshot(screenshotPath) { - await this.page.waitForSelector(this.canvasSelector, { visible: true, timeout: 5000 }); - await this.waitForLoadingContainerToDisappear(); - await this.page.waitForNetworkIdle({ timeout: 5000, idleTime: 500, maxInflightRequests: 0 }); - - const canvas = await this.page.$(this.canvasSelector); - await canvas.screenshot({ path: screenshotPath, omitBackground: true }); - } -} - -module.exports = GraphPage; \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/overview-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/overview-page.js deleted file mode 100644 index 77dd95abcd..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/overview-page.js +++ /dev/null @@ -1,30 +0,0 @@ -const BasePage = require("../base"); - -class OverviewPage extends BasePage { - constructor(page) { - super(page) - - // Selectors - this.listedWorkspacesLinks = 'div[data-testid="overview-area"] div[data-testid="solo-link"] a'; - this.licensesButton = 'button[data-testid="topbar-licenses-toggle"]'; - } - - async getListedWorkspaces() { - await this.page.waitForSelector(this.listedWorkspacesLinks, { visible: true, timeout: 5000 }); - - const workspaceNames = await this.page.evaluate((selector) => { - const links = document.querySelectorAll(selector); - - return Array.from(links).map(link => link.textContent.trim()); - }, this.listedWorkspacesLinks); - - return workspaceNames; - } - - async hasPageLoaded() { - await this.page.waitForSelector(this.licensesButton, { visible: true, timeout: 5000 }); - return true; - } -} - -module.exports = OverviewPage; \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/welcome-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/welcome-page.js deleted file mode 100644 index 3c025ae3df..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/gloo-ui/welcome-page.js +++ /dev/null @@ -1,17 +0,0 @@ -const BasePage = require("../base"); - -class WelcomePage extends BasePage { - constructor(page) { - super(page); - - // Selectors - this.signInButton = 'button'; - } - - async clickSignIn() { - await this.page.waitForSelector(this.signInButton, { visible: true, timeout: 5000 }); - await this.page.click(this.signInButton); - } -} - -module.exports = WelcomePage; \ No newline at end of file diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/insights-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/insights-page.js deleted file mode 100644 index 6221ae93ca..0000000000 --- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/insights-page.js +++ /dev/null @@ -1,106 +0,0 @@ -class InsightsPage { - constructor(page) { - this.page = page; - - // Selectors - this.insightTypeQuickFilters = 
-      healthy: '[data-testid="health-count-box-healthy"]',
-      warning: '[data-testid="health-count-box-warning"]',
-      error: '[data-testid="health-count-box-error"]'
-    };
-    this.clusterDropdownButton = '[data-testid="search by cluster...-dropdown"] button';
-    this.filterByTypeDropdown = '[data-testid="filter by type...-dropdown"] button';
-    this.clearAllButton = '[data-testid="solo-tag"]:first-child';
-    this.tableHeaders = '.ant-table-thead th';
-    this.tableRows = '.ant-table-tbody tr';
-    this.paginationTotalText = '.ant-pagination-total-text';
-    this.selectCheckbox = (name) => `input[type="checkbox"][value="${name}"]`;
-  }
-
-  async navigateTo(url) {
-    await this.page.goto(url, { waitUntil: 'networkidle2' });
-  }
-
-  async getHealthyResourcesCount() {
-    return parseInt(await this.page.$eval(this.insightTypeQuickFilters.healthy, el => el.querySelector('div').textContent));
-  }
-
-  async getWarningResourcesCount() {
-    return parseInt(await this.page.$eval(this.insightTypeQuickFilters.warning, el => el.querySelector('div').textContent));
-  }
-
-  async getErrorResourcesCount() {
-    return parseInt(await this.page.$eval(this.insightTypeQuickFilters.error, el => el.querySelector('div').textContent));
-  }
-
-  async openFilterByTypeDropdown() {
-    await this.page.waitForSelector(this.filterByTypeDropdown, { visible: true });
-    await this.page.click(this.filterByTypeDropdown);
-  }
-
-  async openSearchByClusterDropdown() {
-    await this.page.waitForSelector(this.clusterDropdownButton, { visible: true });
-    await this.page.click(this.clusterDropdownButton);
-  }
-
-  async clearAllFilters() {
-    await this.page.click(this.clearAllButton);
-  }
-
-  async getTableHeaders() {
-    return this.page.$$eval(this.tableHeaders, headers => headers.map(h => h.textContent.trim()));
-  }
-
-  /**
-   * Returns the visible text of each table row, with the cell values of a row joined by spaces.
-   * @returns {Promise<string[]>} One space-joined string per table row.
-   */
-  async getTableDataRows() {
-    const rowsData = await this.page.$$eval(this.tableRows, rows =>
-      rows.map(row => {
-        const cells = row.querySelectorAll('td');
-        const rowData = [];
-        for (const cell of cells) {
-          rowData.push(cell.textContent.trim());
-        }
-        return rowData.join(' ');
-      })
-    );
-    return rowsData;
-  }
-
-  async clickDetailsButton(rowIndex) {
-    // Note: this.detailsButton is not defined in the constructor above; it must be
-    // set to the UI's details-button selector before this method can be used.
-    const buttons = await this.page.$$(this.detailsButton);
-    if (rowIndex < buttons.length) {
-      await buttons[rowIndex].click();
-    } else {
-      throw new Error(`Row index ${rowIndex} is out of bounds`);
-    }
-  }
-
-  async getTotalItemsCount() {
-    const totalText = await this.page.$eval(this.paginationTotalText, el => el.textContent);
-    return parseInt(totalText.match(/Total (\d+) items/)[1]);
-  }
-
-  async selectClusters(clusters) {
-    await this.openSearchByClusterDropdown();
-    for (const cluster of clusters) {
-      await this.page.waitForSelector(this.selectCheckbox(cluster), { visible: true });
-      await this.page.click(this.selectCheckbox(cluster));
-      await new Promise(resolve => setTimeout(resolve, 50));
-    }
-  }
-
-  async selectInsightTypes(types) {
-    await this.openFilterByTypeDropdown();
-    for (const type of types) {
-      await this.page.waitForSelector(this.selectCheckbox(type), { visible: true });
-      await this.page.click(this.selectCheckbox(type));
-      await new Promise(resolve => setTimeout(resolve, 50));
-    }
-  }
-}
-
-module.exports = InsightsPage;
\ No newline at end of file
diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/keycloak-sign-in-page.js b/gloo-mesh/core/byo-redis/2-6/default/tests/pages/keycloak-sign-in-page.js
deleted file mode 100644
index e4d9f36c6f..0000000000
--- a/gloo-mesh/core/byo-redis/2-6/default/tests/pages/keycloak-sign-in-page.js
+++ /dev/null
@@ -1,27 +0,0 @@
-class KeycloakSignInPage {
-  constructor(page) {
-    this.page = page;
-
-    // Selectors
-    this.usernameInput = '#username';
-    this.passwordInput = '#password';
-    this.loginButton = '#kc-login';
-    this.showPasswordButton = 'button[data-password-toggle]';
-  }
-
-  async signIn(username, password) {
-    await new Promise(resolve => setTimeout(resolve, 50));
-    await this.page.waitForSelector(this.usernameInput, { visible: true });
-    await this.page.type(this.usernameInput, username);
-
-    await new Promise(resolve => setTimeout(resolve, 50));
-    await this.page.waitForSelector(this.passwordInput, { visible: true });
-    await this.page.type(this.passwordInput, password);
-
-    await new Promise(resolve => setTimeout(resolve, 50));
-    await this.page.waitForSelector(this.loginButton, { visible: true });
-    await this.page.click(this.loginButton);
-  }
-}
-
-module.exports = KeycloakSignInPage;
\ No newline at end of file
diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/utils.js b/gloo-mesh/core/byo-redis/2-6/default/tests/utils.js
deleted file mode 100644
index 9747efaa2c..0000000000
--- a/gloo-mesh/core/byo-redis/2-6/default/tests/utils.js
+++ /dev/null
@@ -1,13 +0,0 @@
-// Reassigns the `global` identifier to this helper object so that tests can call
-// global.sleep(...) and global.waitOnFailedTest(...) once this module is required.
-global = {
-  sleep: ms => new Promise(resolve => setTimeout(resolve, ms)),
-  waitOnFailedTest: (done, currentRetry) => {
-    if (currentRetry > 0) {
-      process.stdout.write(".");
-      setTimeout(done, 1000);
-    } else {
-      done();
-    }
-  }
-};
-
-module.exports = global;
\ No newline at end of file
diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/enhance-browser.js b/gloo-mesh/core/byo-redis/2-6/default/tests/utils/enhance-browser.js
deleted file mode 100644
index 3f4dc50c9f..0000000000
--- a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/enhance-browser.js
+++ /dev/null
@@ -1,90 +0,0 @@
-const fs = require('fs');
-const path = require('path');
-const { logDebug } = require('./logging');
-
-function enhanceBrowser(browser, testId = 'test', shouldRecord = true) {
-  let recorder;
-  let page;
-  let sanitizedTestId = testId.replace(/ /g, '_');
-  const downloadPath = path.resolve('./ui-test-data');
-  fs.mkdirSync(downloadPath, { recursive: true });
-
-  async function withTimeout(promise, ms, errorMessage) {
-    let timeoutId;
-    const timeoutPromise = new Promise((_, reject) => {
-      timeoutId = setTimeout(() => reject(new Error(errorMessage)), ms);
-    });
-    const result = await Promise.race([promise, timeoutPromise]);
-    clearTimeout(timeoutId);
-    return result;
-  }
-
-  const enhancedBrowser = new Proxy(browser, {
-    get(target, prop) {
-      if (prop === 'newPage') {
-        return async function (...args) {
-          page = await target.newPage(...args);
-          await page.setViewport({ width: 1500, height: 1000 });
-          if (shouldRecord) {
-            recorder = await page.screencast({ path: `./ui-test-data/${sanitizedTestId}-recording.webm` });
-          }
-          return page;
-        };
-      } else if (prop === 'close') {
-        return async function (...args) {
-          if (page) {
-            if (shouldRecord && recorder) {
-              logDebug('Stopping recorder...');
-              try {
-                await withTimeout(recorder.stop(), 10000, 'Recorder stop timed out');
-                logDebug('Recorder stopped.');
-              } catch (e) {
-                logDebug('Failed to stop recorder:', e);
-              }
-            }
-            try {
-              logDebug('Checking if page has __DUMP_SWR_CACHE__');
-              const hasDumpSWRCache = await page.evaluate(() => !!window.__DUMP_SWR_CACHE__);
-              if (hasDumpSWRCache) {
-                logDebug('Dumping SWR cache...');
-                const client = await page.target().createCDPSession();
-                const fileName = `${sanitizedTestId}-dump-swr-cache.txt`;
-                const fullDownloadPath = path.join(downloadPath, fileName);
-
-                await client.send('Page.setDownloadBehavior', {
-                  behavior: 'allow',
-                  downloadPath: downloadPath,
-                });
-                await page.evaluate(() => {
-                  window.__DUMP_SWR_CACHE__("dump-swr-cache.txt");
-                });
-
-                // waiting for the file to be saved
-                await new Promise((resolve) => setTimeout(resolve, 5000));
-                fs.renameSync(path.join(downloadPath, "dump-swr-cache.txt"), fullDownloadPath);
-                logDebug('UI dump of SWR cache:', fullDownloadPath);
-              } else {
-                logDebug('__DUMP_SWR_CACHE__ not found on window object.');
-              }
-            } catch (e) {
-              logDebug('Failed to dump SWR cache:', e);
-            }
-          }
-          await new Promise((resolve) => setTimeout(resolve, 2000));
-          await target.close(...args);
-        };
-      } else {
-        const value = target[prop];
-        if (typeof value === 'function') {
-          return value.bind(target);
-        } else {
-          return value;
-        }
-      }
-    },
-  });
-
-  return enhancedBrowser;
-}
-
-module.exports = { enhanceBrowser };
\ No newline at end of file
diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/image-ocr-processor.js b/gloo-mesh/core/byo-redis/2-6/default/tests/utils/image-ocr-processor.js
deleted file mode 100644
index 6358070d59..0000000000
--- a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/image-ocr-processor.js
+++ /dev/null
@@ -1,174 +0,0 @@
-const Tesseract = require('tesseract.js');
-const sharp = require('sharp');
-const fs = require('fs');
-const path = require('path');
-const { logDebug } = require('../utils/logging');
-
-const OUTPUT_DIR = 'extracted_text_boxes';
-
-// Helper function to check if the pixel color matches the target color
-function colorsMatch(pixel, targetColor, channels) {
-  if (channels === 4) {
-    return (
-      pixel[0] === targetColor.r &&
-      pixel[1] === targetColor.g &&
-      pixel[2] === targetColor.b &&
-      pixel[3] === 255
-    );
-  } else if (channels === 3) {
-    return (
-      pixel[0] === targetColor.r &&
-      pixel[1] === targetColor.g &&
-      pixel[2] === targetColor.b
-    );
-  }
-  return false;
-}
-
-// Function to find bounding boxes that match the target color
-async function getTextBoxBoundingBoxes(imageBuffer, width, height, channels, targetColor) {
-  const boundingBoxes = [];
-  const visited = new Array(width * height).fill(false);
-  const getIndex = (x, y) => y * width + x;
-
-  for (let y = 0; y < height; y++) {
-    for (let x = 0; x < width; x++) {
-      const idx = getIndex(x, y);
-      if (visited[idx]) continue;
-
-      const pixelStart = idx * channels;
-      const pixel = imageBuffer.slice(pixelStart, pixelStart + channels);
-      if (colorsMatch(pixel, targetColor, channels)) {
-        const queue = [];
-        queue.push({ x, y });
-        visited[idx] = true;
-
-        let minX = x,
-          maxX = x;
-        let minY = y,
-          maxY = y;
-
-        while (queue.length > 0) {
-          const { x: currentX, y: currentY } = queue.shift();
-
-          const neighbors = [
-            { x: currentX + 1, y: currentY },
-            { x: currentX - 1, y: currentY },
-            { x: currentX, y: currentY + 1 },
-            { x: currentX, y: currentY - 1 },
-          ];
-
-          for (const neighbor of neighbors) {
-            if (
-              neighbor.x >= 0 &&
-              neighbor.x < width &&
-              neighbor.y >= 0 &&
-              neighbor.y < height
-            ) {
-              const neighborIdx = getIndex(neighbor.x, neighbor.y);
-              if (!visited[neighborIdx]) {
-                const neighborPixelStart = neighborIdx * channels;
-                const neighborPixel = imageBuffer.slice(
-                  neighborPixelStart,
-                  neighborPixelStart + channels
-                );
-                if (colorsMatch(neighborPixel, targetColor, channels)) {
-                  queue.push({ x: neighbor.x, y: neighbor.y });
-                  visited[neighborIdx] = true;
-
-                  minX = Math.min(minX, neighbor.x);
-                  maxX = Math.max(maxX, neighbor.x);
-                  minY = Math.min(minY, neighbor.y);
-                  maxY = Math.max(maxY, neighbor.y);
-                }
-              }
-            }
-          }
-        }
-
-        const padding = -1;
-        const removePointingCaret = 6;
-        boundingBoxes.push({
-          left: Math.max(0, Math.min(minX - padding, width - 1)),
-          top: Math.max(0, Math.min(minY - padding, height - 1)),
-          width: Math.max(
-            1,
-            Math.min(maxX - minX + 2 * padding, width - Math.max(0, minX - padding))
-          ),
-          height: Math.max(
-            1,
-            Math.min(maxY - minY + 2 * padding, height - Math.max(0, minY - padding))
-          ) - removePointingCaret,
-        });
-      }
-    }
-  }
-
-  return boundingBoxes;
-}
-
-// Function to extract boxes from image
-async function extractTextBoxes(inputImagePath, targetColor) {
-  const image = sharp(inputImagePath);
-  const metadata = await image.metadata();
-  const { width, height, channels } = metadata;
-
-  if (channels !== 3 && channels !== 4) {
-    throw new Error(`Unsupported number of channels: ${channels}. Only RGB and RGBA are supported.`);
-  }
-
-  const { data } = await image.raw().toBuffer({ resolveWithObject: true });
-  const boundingBoxes = await getTextBoxBoundingBoxes(data, width, height, channels, targetColor);
-  logDebug(`Found ${boundingBoxes.length} text box(es).`);
-
-  if (!fs.existsSync(OUTPUT_DIR)) {
-    fs.mkdirSync(OUTPUT_DIR);
-  }
-
-  const extractedImages = [];
-  for (let i = 0; i < boundingBoxes.length; i++) {
-    const image = sharp(inputImagePath);
-    let box = boundingBoxes[i];
-
-    // Skip small boxes, those are artifacts, or rediscoveries of the characters in the same box.
-    if (box.width < 50 && box.height < 30) {
-      continue;
-    }
-
-    const outputPath = path.join(OUTPUT_DIR, `text_box_${i + 1}.png`);
-    await image.extract(box).ensureAlpha().png().toFile(outputPath);
-    extractedImages.push(outputPath);
-  }
-
-  return extractedImages;
-}
-
-// Extract boxes with `targetColor` and perform OCR on those.
-/**
- * Recognizes text from a screenshot image.
- *
- * @param {string} imagePath - The path to the screenshot image.
- * @param {string[]} expectedWords - An array of expected words to recognize; they seed the OCR character whitelist.
- * @param {object} targetColor - The target color used to extract text boxes. The default, { r: 53, g: 57, b: 59 }, matches the service labels in the Observability graph.
- * @returns {Promise<string[]>} - A promise that resolves to an array of recognized texts.
- */
-async function recognizeTextFromScreenshot(imagePath, expectedWords = [], targetColor = { r: 53, g: 57, b: 59 }) {
-  const whitelist = expectedWords.join('').replace(/\s+/g, '');
-  const extractedImages = await extractTextBoxes(imagePath, targetColor);
-
-  const recognizedTexts = [];
-  for (const image of extractedImages) {
-    const text = await Tesseract.recognize(image, 'eng', {
-      tessedit_pageseg_mode: 11,
-      tessedit_ocr_engine_mode: 1,
-      tessedit_char_whitelist: whitelist,
-    }).then(({ data: { text } }) => text);
-    recognizedTexts.push(text);
-  }
-
-  return recognizedTexts;
-}
-
-module.exports = {
-  recognizeTextFromScreenshot,
-};
diff --git a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/logging.js b/gloo-mesh/core/byo-redis/2-6/default/tests/utils/logging.js
deleted file mode 100644
index 45a6199ca3..0000000000
--- a/gloo-mesh/core/byo-redis/2-6/default/tests/utils/logging.js
+++ /dev/null
@@ -1,7 +0,0 @@
-const debugMode = process.env.RUNNER_DEBUG === '1';
-function logDebug(...args) {
-  if (debugMode) {
-    console.log(...args);
-  }
-}
-module.exports = { logDebug };
\ No newline at end of file
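
A minimal usage sketch for the `InsightsPage` page object above, assuming the script runs from the `tests` directory; the dashboard URL and cluster names are placeholders, not values taken from this repository:

```js
const puppeteer = require('puppeteer');
const InsightsPage = require('./pages/insights-page');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const insights = new InsightsPage(page);

  // Placeholder URL: point this at the Gloo UI endpoint exposed in your environment.
  await insights.navigateTo('http://localhost:8090/insights');

  // Narrow the table down to the clusters under test, then read the quick-filter counters.
  await insights.selectClusters(['cluster1', 'cluster2']);
  console.log('healthy:', await insights.getHealthyResourcesCount());
  console.log('errors:', await insights.getErrorResourcesCount());
  console.log(await insights.getTableDataRows());

  await browser.close();
})();
```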
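A hedged sketch of how `enhanceBrowser` is meant to wrap a Puppeteer browser so that `newPage()` starts a screen recording and `close()` stops it; the test id and URL below are illustrative only:

```js
const puppeteer = require('puppeteer');
const { enhanceBrowser } = require('./utils/enhance-browser');

(async () => {
  // 'insights smoke test' is an arbitrary id; spaces become underscores in the
  // recording file name (./ui-test-data/insights_smoke_test-recording.webm).
  const browser = enhanceBrowser(await puppeteer.launch(), 'insights smoke test');
  const page = await browser.newPage(); // viewport is set and the screencast starts here

  await page.goto('http://localhost:8090', { waitUntil: 'networkidle2' }); // placeholder URL

  await browser.close(); // stops the recording and dumps the SWR cache if the UI exposes it
})();
```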
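A sketch of how the canvas screenshot and OCR helpers fit together, assuming `GraphPage` lives next to the other `gloo-ui` page objects and is constructed with the Puppeteer page like its siblings; the output path and service names are examples only:

```js
const GraphPage = require('./pages/gloo-ui/graph-page');
const { recognizeTextFromScreenshot } = require('./utils/image-ocr-processor');

async function graphShowsServices(page, expectedServices) {
  const graphPage = new GraphPage(page);
  const screenshot = './ui-test-data/graph.png'; // illustrative output path

  await graphPage.fullscreenGraph();
  await graphPage.captureCanvasScreenshot(screenshot);

  // The expected words feed the Tesseract character whitelist; the default target
  // colour corresponds to the service labels in the Observability graph.
  const texts = await recognizeTextFromScreenshot(screenshot, expectedServices);
  return expectedServices.every(name => texts.some(text => text.includes(name)));
}

// Example: graphShowsServices(page, ['productpage', 'reviews', 'ratings']);
```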