From 0b202736e7a3cd25f5dad09567c3fd797e9aa4ea Mon Sep 17 00:00:00 2001 From: Matthew Fisher Date: Tue, 13 Aug 2024 12:16:07 -0700 Subject: [PATCH 1/3] hard wrap at 100 characters This conforms with SKIP 00x - Documenting Spinkube Signed-off-by: Matthew Fisher --- content/en/_index.md | 60 ++- content/en/docs/_index.md | 13 +- content/en/docs/contrib/troubleshooting.md | 4 +- content/en/docs/contrib/writing-code.md | 3 +- content/en/docs/glossary.md | 154 ++++-- .../docs/install/azure-kubernetes-service.md | 38 +- .../en/docs/install/installing-with-helm.md | 38 +- .../docs/install/linode-kubernetes-engine.md | 90 ++-- content/en/docs/install/microk8s.md | 68 ++- content/en/docs/install/quickstart.md | 37 +- content/en/docs/install/rancher-desktop.md | 37 +- content/en/docs/install/spin-kube-plugin.md | 9 +- content/en/docs/misc/compatibility.md | 31 +- content/en/docs/misc/integrations.md | 21 +- content/en/docs/overview.md | 18 +- .../en/docs/reference/spin-app-executor.md | 3 +- content/en/docs/reference/spin-app.md | 448 ++++++++---------- content/en/docs/topics/architecture.md | 15 +- content/en/docs/topics/assigning-variables.md | 51 +- .../en/docs/topics/autoscaling/autoscaling.md | 8 +- .../topics/autoscaling/scaling-with-hpa.md | 73 ++- .../topics/autoscaling/scaling-with-keda.md | 68 ++- .../topics/connecting-to-a-sqlite-database.md | 47 +- .../topics/external-variable-providers.md | 86 +++- content/en/docs/topics/packaging.md | 59 ++- .../en/docs/topics/using-a-key-value-store.md | 50 +- 26 files changed, 988 insertions(+), 541 deletions(-) diff --git a/content/en/_index.md b/content/en/_index.md index 6d9a04c3..43cb1bcc 100644 --- a/content/en/_index.md +++ b/content/en/_index.md @@ -2,10 +2,9 @@ title: SpinKube --- -{{< blocks/cover title="Hyper-efficient serverless on Kubernetes, powered by WebAssembly." image_anchor="top" height="full" >}} - - Get Started - +{{< blocks/cover title="Hyper-efficient serverless on Kubernetes, powered by WebAssembly." +image_anchor="top" height="full" >}} Get + Started

SpinKube is an open source project that streamlines developing, deploying and operating WebAssembly workloads in Kubernetes, resulting in smaller, more portable applications and incredible compute performance benefits.

{{< blocks/link-down color="info" >}} {{< /blocks/cover >}} @@ -13,7 +12,13 @@ title: SpinKube {{% blocks/lead color="secondary" %}} -SpinKube combines the Spin operator, containerd Spin shim, and the runtime class manager (formerly KWasm) open source projects with contributions from Microsoft, SUSE, Liquid Reply, and Fermyon. By running applications at the Wasm abstraction layer, SpinKube gives developers a more powerful, efficient and scalable way to optimize application delivery on Kubernetes. +SpinKube combines the Spin operator, containerd Spin shim, and the runtime class manager (formerly KWasm) open source projects with contributions from Microsoft, SUSE, +Liquid Reply, and Fermyon. By running applications at the Wasm abstraction layer, SpinKube gives +developers a more powerful, efficient and scalable way to optimize application delivery on +Kubernetes. ### Made with Contributions from: @@ -23,19 +28,36 @@ SpinKube combines the Spin o ### Overview -**Containerd Shim Spin** -The [Containerd Shim Spin repository](https://github.com/spinkube/containerd-shim-spin) provides shim implementations for running WebAssembly ([Wasm](https://webassembly.org/)) / Wasm System Interface ([WASI](https://github.com/WebAssembly/WASI)) workloads using [runwasi](https://github.com/deislabs/runwasi) as a library, whereby workloads built using the [Spin framework](https://github.com/fermyon/spin) can function similarly to container workloads in a Kubernetes environment. - -**Runtime Class Manager** -The [Runtime Class Manager, also known as the Containerd Shim Lifecycle Operator](https://github.com/spinkube/runtime-class-manager), is designed to automate and manage the lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation, update, removal, and configuration of shims, reducing manual errors and improving reliability in managing WebAssembly (Wasm) workloads and other containerd extensions. - -**Spin Plugin for Kubernetes** -The [Spin plugin for Kubernetes](https://github.com/spinkube/spin-plugin-kube) is designed to enhance Kubernetes by enabling the execution of Wasm modules directly within a Kubernetes cluster. This plugin works by integrating with containerd shims, allowing Kubernetes to manage and run Wasm workloads in a way similar to traditional container workloads. - -**Spin Operator** -The [Spin Operator](https://github.com/spinkube/spin-operator/) enables deploying Spin applications to Kubernetes. The foundation of this project is built using the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. Spin Operator defines Spin App Custom Resource Definitions (CRDs). Spin Operator watches SpinApp Custom Resources e.g. Spin app image, replicas, schedulers and other user-defined values and realizes the desired state in the Kubernetes cluster. Spin Operator introduces a host of functionality such as resource-based scaling event-driven scaling and much more. - -**Spin Trigger MQTT** -[Spin Trigger MQTT](https://github.com/spinkube/spin-trigger-mqtt/) is an MQTT trigger designed specifically for Spin. It enables seamless integration between Spin and MQTT-based systems, allowing you to automate workflows and trigger actions based on MQTT messages. 
+**Containerd Shim Spin** The [Containerd Shim Spin +repository](https://github.com/spinkube/containerd-shim-spin) provides shim implementations for +running WebAssembly ([Wasm](https://webassembly.org/)) / Wasm System Interface +([WASI](https://github.com/WebAssembly/WASI)) workloads using +[runwasi](https://github.com/deislabs/runwasi) as a library, whereby workloads built using the [Spin +framework](https://github.com/fermyon/spin) can function similarly to container workloads in a +Kubernetes environment. + +**Runtime Class Manager** The [Runtime Class Manager, also known as the Containerd Shim Lifecycle +Operator](https://github.com/spinkube/runtime-class-manager), is designed to automate and manage the +lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation, +update, removal, and configuration of shims, reducing manual errors and improving reliability in +managing WebAssembly (Wasm) workloads and other containerd extensions. + +**Spin Plugin for Kubernetes** The [Spin plugin for +Kubernetes](https://github.com/spinkube/spin-plugin-kube) is designed to enhance Kubernetes by +enabling the execution of Wasm modules directly within a Kubernetes cluster. This plugin works by +integrating with containerd shims, allowing Kubernetes to manage and run Wasm workloads in a way +similar to traditional container workloads. + +**Spin Operator** The [Spin Operator](https://github.com/spinkube/spin-operator/) enables deploying +Spin applications to Kubernetes. The foundation of this project is built using the +[kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. Spin Operator defines Spin +App Custom Resource Definitions (CRDs). Spin Operator watches SpinApp Custom Resources e.g. Spin app +image, replicas, schedulers and other user-defined values and realizes the desired state in the +Kubernetes cluster. Spin Operator introduces a host of functionality such as resource-based scaling +event-driven scaling and much more. + +**Spin Trigger MQTT** [Spin Trigger MQTT](https://github.com/spinkube/spin-trigger-mqtt/) is an MQTT +trigger designed specifically for Spin. It enables seamless integration between Spin and MQTT-based +systems, allowing you to automate workflows and trigger actions based on MQTT messages. {{% /blocks/lead %}} diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md index 880c845a..6c2b62be 100755 --- a/content/en/docs/_index.md +++ b/content/en/docs/_index.md @@ -17,10 +17,11 @@ where to look for certain things. - [Installation guides]({{< relref "install" >}}) cover how to install SpinKube on various platforms. -- [Topic guides]({{< relref "topics" >}}) discuss key topics and concepts at a fairly high level and provide useful background - information and explanation. -- [Reference guides]({{< relref "reference" >}}) contain technical reference for APIs and other aspects of SpinKube's machinery. - They describe how it works and how to use it but assume that you have a basic understanding of key - concepts. +- [Topic guides]({{< relref "topics" >}}) discuss key topics and concepts at a fairly high level and + provide useful background information and explanation. +- [Reference guides]({{< relref "reference" >}}) contain technical reference for APIs and other + aspects of SpinKube's machinery. They describe how it works and how to use it but assume that you + have a basic understanding of key concepts. - [Contributing guides]({{< relref "contrib" >}}) show how to contribute to the SpinKube project. 
-- [Miscellaneous guides]({{< relref "misc" >}}) cover topics that don't fit neatly into either of the above categories. +- [Miscellaneous guides]({{< relref "misc" >}}) cover topics that don't fit neatly into either of + the above categories. diff --git a/content/en/docs/contrib/troubleshooting.md b/content/en/docs/contrib/troubleshooting.md index bc312aef..33f62dc0 100644 --- a/content/en/docs/contrib/troubleshooting.md +++ b/content/en/docs/contrib/troubleshooting.md @@ -82,8 +82,8 @@ $ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/downloa error: error validating "https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:6443/openapi/v2?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false ``` -This is because no cluster exists. You can create a cluster following the -[Quickstart guide]({{< ref "quickstart" >}}). +This is because no cluster exists. You can create a cluster following the [Quickstart guide]({{< ref +"quickstart" >}}). ## Installation Failed diff --git a/content/en/docs/contrib/writing-code.md b/content/en/docs/contrib/writing-code.md index a8df13ff..adc99eba 100644 --- a/content/en/docs/contrib/writing-code.md +++ b/content/en/docs/contrib/writing-code.md @@ -161,4 +161,5 @@ See the guide on [writing documentation]({{< ref "writing-documentation" >}}) fo Congratulations! You've made a contribution to SpinKube. After a pull request has been submitted, it needs to be reviewed by a maintainer. Reach out on the -`#spinkube` channel on the [CNCF Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE) to ask for a review. +`#spinkube` channel on the [CNCF Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE) to ask +for a review. diff --git a/content/en/docs/glossary.md b/content/en/docs/glossary.md index 3a0981e0..7b504391 100644 --- a/content/en/docs/glossary.md +++ b/content/en/docs/glossary.md @@ -5,35 +5,71 @@ weight: 100 categories: [SpinKube] --- -The following glossary of terms is in the context of deploying, scaling, automating and managing Spin applications in containerized environments. +The following glossary of terms is in the context of deploying, scaling, automating and managing +Spin applications in containerized environments. ## Chart -A Helm chart is a package format used in Kubernetes for deploying applications. It contains all the necessary files, configurations, and dependencies required to deploy and manage an application on a Kubernetes cluster. Helm charts provide a convenient way to define, install, and upgrade complex applications in a consistent and reproducible manner. +A Helm chart is a package format used in Kubernetes for deploying applications. It contains all the +necessary files, configurations, and dependencies required to deploy and manage an application on a +Kubernetes cluster. Helm charts provide a convenient way to define, install, and upgrade complex +applications in a consistent and reproducible manner. ## Cluster -A Kubernetes cluster is a group of nodes (servers) that work together to run containerized applications. It consists of a control plane and worker nodes. The control plane manages and orchestrates the cluster, while the worker nodes host the containers. The control plane includes components like the API server, scheduler, and controller manager. 
The worker nodes run the containers using container runtime engines like Docker. Kubernetes clusters provide scalability, high availability, and automated management of containerized applications in a distributed environment. +A Kubernetes cluster is a group of nodes (servers) that work together to run containerized +applications. It consists of a control plane and worker nodes. The control plane manages and +orchestrates the cluster, while the worker nodes host the containers. The control plane includes +components like the API server, scheduler, and controller manager. The worker nodes run the +containers using container runtime engines like Docker. Kubernetes clusters provide scalability, +high availability, and automated management of containerized applications in a distributed +environment. ## Container Runtime -A container runtime is a software that manages the execution of containers. It is responsible for starting, stopping, and managing the lifecycle of containers. Container runtimes interact with the underlying operating system to provide isolation and resource management for containers. They also handle networking, storage, and security aspects of containerization. Popular container runtimes include Docker, containerd, and CRI-O. They enable the deployment and management of containerized applications, allowing developers to package their applications with all the necessary dependencies and run them consistently across different environments. +A container runtime is a software that manages the execution of containers. It is responsible for +starting, stopping, and managing the lifecycle of containers. Container runtimes interact with the +underlying operating system to provide isolation and resource management for containers. They also +handle networking, storage, and security aspects of containerization. Popular container runtimes +include Docker, containerd, and CRI-O. They enable the deployment and management of containerized +applications, allowing developers to package their applications with all the necessary dependencies +and run them consistently across different environments. ## Controller -A Controller is a core component responsible for managing the desired state of a specific resource or set of resources. It continuously monitors the cluster and takes actions to ensure that the actual state matches the desired state. Controllers handle tasks such as creating, updating, and deleting resources, as well as reconciling any discrepancies between the current and desired states. They provide automation and self-healing capabilities, ensuring that the cluster remains in the desired state even in the presence of failures or changes. Controllers play a crucial role in maintaining the stability and reliability of Kubernetes deployments. +A Controller is a core component responsible for managing the desired state of a specific resource +or set of resources. It continuously monitors the cluster and takes actions to ensure that the +actual state matches the desired state. Controllers handle tasks such as creating, updating, and +deleting resources, as well as reconciling any discrepancies between the current and desired states. +They provide automation and self-healing capabilities, ensuring that the cluster remains in the +desired state even in the presence of failures or changes. Controllers play a crucial role in +maintaining the stability and reliability of Kubernetes deployments. 
## Custom Resource (CR) -In the context of Kubernetes, a Custom Resource (CR) is an extension mechanism that allows users to define and manage their own API resources. It enables the creation of new resource types that are specific to an application or workload. Custom Resources are defined using Custom Resource Definitions (CRDs) and can be treated and managed like any other Kubernetes resource. They provide a way to extend the Kubernetes API and enable the development of custom controllers to handle the lifecycle and behavior of these resources. Custom Resources allow for greater flexibility and customization in Kubernetes deployments. +In the context of Kubernetes, a Custom Resource (CR) is an extension mechanism that allows users to +define and manage their own API resources. It enables the creation of new resource types that are +specific to an application or workload. Custom Resources are defined using Custom Resource +Definitions (CRDs) and can be treated and managed like any other Kubernetes resource. They provide a +way to extend the Kubernetes API and enable the development of custom controllers to handle the +lifecycle and behavior of these resources. Custom Resources allow for greater flexibility and +customization in Kubernetes deployments. ## Custom Resource Definition (CRD) -A Custom Resource Definition (CRD) is an extension mechanism that allows users to define their own custom resources. It enables the creation of new resource types with specific schemas and behaviors. CRDs define the structure and validation rules for custom resources, allowing users to store and manage additional information beyond the built-in Kubernetes resources. Once a CRD is created, instances of the custom resource can be created, updated, and deleted using the Kubernetes API. CRDs provide a way to extend Kubernetes and tailor it to specific application requirements. +A Custom Resource Definition (CRD) is an extension mechanism that allows users to define their own +custom resources. It enables the creation of new resource types with specific schemas and behaviors. +CRDs define the structure and validation rules for custom resources, allowing users to store and +manage additional information beyond the built-in Kubernetes resources. Once a CRD is created, +instances of the custom resource can be created, updated, and deleted using the Kubernetes API. CRDs +provide a way to extend Kubernetes and tailor it to specific application requirements. ## SpinApp CRD -The SpinApp CRD is a Kubernetes resource that extends the functionality of the Kubernetes API to support Spin applications. It defines a custom resource called "SpinApp" that encapsulates all the necessary information to deploy and manage a Spin application within a Kubernetes cluster. The SpinApp CRD consists of several key fields that define the desired state of a Spin application. +The SpinApp CRD is a Kubernetes resource that extends the functionality of the Kubernetes API to +support Spin applications. It defines a custom resource called "SpinApp" that encapsulates all the +necessary information to deploy and manage a Spin application within a Kubernetes cluster. The +SpinApp CRD consists of several key fields that define the desired state of a Spin application. Here's an example of a SpinApp custom resource that uses the SpinApp CRD schema: @@ -48,9 +84,11 @@ spec: executor: "containerd-shim-spin" ``` -> SpinApp CRDs are kept separate from Helm. 
If using Helm, CustomResourceDefinition (CRD) resources must be installed prior to installing the Helm chart. +> SpinApp CRDs are kept separate from Helm. If using Helm, CustomResourceDefinition (CRD) resources +> must be installed prior to installing the Helm chart. -You can modify the example above to customize the SpinApp via a YAML file. Here's an updated YAML file with additional customization options: +You can modify the example above to customize the SpinApp via a YAML file. Here's an updated YAML +file with additional customization options: ```yaml apiVersion: core.spinoperator.dev/v1alpha1 @@ -83,13 +121,19 @@ spec: In this updated example, we have added additional customization options: -- `imagePullSecrets`: An optional field that lets you reference a Kubernetes secret that has credentials for you to pull in images from a private registry. -- `serviceAnnotations`: An optional field that lets you set specific annotations on the underlying service that is created. -- `podAnnotations`: An optional field that lets you set specific annotations on the underlying pods that are created. -- `resources`: You can specify resource limits and requests for CPU and memory. Adjust the values according to your application's resource requirements. -- `env`: You can define environment variables for your SpinApp. Add as many environment variables as needed, providing the name and value for each. +- `imagePullSecrets`: An optional field that lets you reference a Kubernetes secret that has + credentials for you to pull in images from a private registry. +- `serviceAnnotations`: An optional field that lets you set specific annotations on the underlying + service that is created. +- `podAnnotations`: An optional field that lets you set specific annotations on the underlying pods + that are created. +- `resources`: You can specify resource limits and requests for CPU and memory. Adjust the values + according to your application's resource requirements. +- `env`: You can define environment variables for your SpinApp. Add as many environment variables as + needed, providing the name and value for each. -To apply the changes, save the YAML file (e.g. `updated-spinapp.yaml`) and then apply it to your Kubernetes cluster using the following command: +To apply the changes, save the YAML file (e.g. `updated-spinapp.yaml`) and then apply it to your +Kubernetes cluster using the following command: ```bash kubectl apply -f updated-spinapp.yaml @@ -97,53 +141,99 @@ kubectl apply -f updated-spinapp.yaml ## Helm -Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It uses charts, which are pre-configured templates, to define the structure and configuration of an application. Helm allows users to easily install, upgrade, and uninstall applications on a Kubernetes cluster. It also supports versioning, dependency management, and customization of deployments. Helm charts can be shared and reused, making it a convenient tool for managing complex applications in a Kubernetes environment. +Helm is a package manager for Kubernetes that simplifies the deployment and management of +applications. It uses charts, which are pre-configured templates, to define the structure and +configuration of an application. Helm allows users to easily install, upgrade, and uninstall +applications on a Kubernetes cluster. It also supports versioning, dependency management, and +customization of deployments. 
Helm charts can be shared and reused, making it a convenient tool for +managing complex applications in a Kubernetes environment. ## Image -In the context of Kubernetes, an image refers to a packaged and executable software artifact that contains all the necessary dependencies and configurations to run a specific application or service. It is typically built from a Dockerfile and stored in a container registry. Images are used as the basis for creating containers, which are lightweight and isolated runtime environments. Kubernetes pulls the required images from the registry and deploys them onto the cluster's worker nodes. Images play a crucial role in ensuring consistent and reproducible deployments of applications in Kubernetes. +In the context of Kubernetes, an image refers to a packaged and executable software artifact that +contains all the necessary dependencies and configurations to run a specific application or service. +It is typically built from a Dockerfile and stored in a container registry. Images are used as the +basis for creating containers, which are lightweight and isolated runtime environments. Kubernetes +pulls the required images from the registry and deploys them onto the cluster's worker nodes. Images +play a crucial role in ensuring consistent and reproducible deployments of applications in +Kubernetes. ## Kubernetes -Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a framework for running and coordinating containers across a cluster of nodes. Kubernetes abstracts the underlying infrastructure and provides features like load balancing, service discovery, and self-healing capabilities. It enables organizations to efficiently manage and scale their applications, ensuring high availability and resilience. +Kubernetes is an open-source container orchestration platform that automates the deployment, +scaling, and management of containerized applications. It provides a framework for running and +coordinating containers across a cluster of nodes. Kubernetes abstracts the underlying +infrastructure and provides features like load balancing, service discovery, and self-healing +capabilities. It enables organizations to efficiently manage and scale their applications, ensuring +high availability and resilience. ## Open Container Initiative (OCI) -The Open Container Initiative (OCI) is an open governance structure and project that aims to create industry standards for container formats and runtime. It was formed to ensure compatibility and interoperability between different container technologies. OCI defines specifications for container images and runtime, which are used by container runtimes like Docker and containerd. These specifications provide a common framework for packaging and running containers, allowing users to build and distribute container images that can be executed on any OCI-compliant runtime. OCI plays a crucial role in promoting portability and standardization in the container ecosystem. +The Open Container Initiative (OCI) is an open governance structure and project that aims to create +industry standards for container formats and runtime. It was formed to ensure compatibility and +interoperability between different container technologies. OCI defines specifications for container +images and runtime, which are used by container runtimes like Docker and containerd. 
These +specifications provide a common framework for packaging and running containers, allowing users to +build and distribute container images that can be executed on any OCI-compliant runtime. OCI plays a +crucial role in promoting portability and standardization in the container ecosystem. ## Pod -A Pod is the smallest and most basic unit of deployment. It represents a single instance of a running process in a cluster. A Pod can contain one or more containers that are tightly coupled and share the same resources, such as network and storage. Containers within a Pod are scheduled and deployed together on the same node. Pods are ephemeral and can be created, deleted, or replaced dynamically. They provide a way to encapsulate and manage the lifecycle of containerized applications in Kubernetes. +A Pod is the smallest and most basic unit of deployment. It represents a single instance of a +running process in a cluster. A Pod can contain one or more containers that are tightly coupled and +share the same resources, such as network and storage. Containers within a Pod are scheduled and +deployed together on the same node. Pods are ephemeral and can be created, deleted, or replaced +dynamically. They provide a way to encapsulate and manage the lifecycle of containerized +applications in Kubernetes. ## Role Based Access Control (RBAC) -Role-Based Access Control (RBAC) is a security mechanism in Kubernetes that provides fine-grained control over access to cluster resources. RBAC allows administrators to define roles and permissions for users or groups, granting or restricting access to specific operations and resources within the cluster. RBAC ensures that only authorized users can perform certain actions, helping to enforce security policies and prevent unauthorized access to sensitive resources. It enhances the overall security and governance of Kubernetes clusters. +Role-Based Access Control (RBAC) is a security mechanism in Kubernetes that provides fine-grained +control over access to cluster resources. RBAC allows administrators to define roles and permissions +for users or groups, granting or restricting access to specific operations and resources within the +cluster. RBAC ensures that only authorized users can perform certain actions, helping to enforce +security policies and prevent unauthorized access to sensitive resources. It enhances the overall +security and governance of Kubernetes clusters. ## Runtime Class -A Runtime Class is a resource that allows users to specify different container runtimes for running their workloads. It provides a way to define and select the runtime environment in which a Pod should be executed. By using Runtime Classes, users can choose between different container runtimes, based on their specific requirements. This flexibility enables the deployment of workloads with different runtime characteristics, allowing for better resource utilization and performance optimization in Kubernetes clusters. +A Runtime Class is a resource that allows users to specify different container runtimes for running +their workloads. It provides a way to define and select the runtime environment in which a Pod +should be executed. By using Runtime Classes, users can choose between different container runtimes, +based on their specific requirements. This flexibility enables the deployment of workloads with +different runtime characteristics, allowing for better resource utilization and performance +optimization in Kubernetes clusters. 
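+
+As an illustration (not an installation step), a RuntimeClass that maps Pods onto a Wasm-capable
+handler might look like the sketch below. The `handler` value must match the runtime name the
+shim registers with containerd on the node; `spin` is what the containerd-shim-spin setup
+typically uses, and `wasmtime-spin-v2` is the RuntimeClass name used elsewhere in this
+documentation:
+
+```yaml
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+  name: wasmtime-spin-v2
+handler: spin
+```
+
+Regular Pods opt in by setting `spec.runtimeClassName` to this name.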
## Scheduler -A scheduler is a component responsible for assigning Pods to nodes in the cluster. It takes into account factors like resource availability, node capacity, and any defined scheduling constraints or policies. The scheduler ensures that Pods are placed on suitable nodes to optimize resource utilization and maintain high availability. It considers factors such as affinity, anti-affinity, and resource requirements when making scheduling decisions. The scheduler continuously monitors the cluster and makes adjustments as needed to maintain the desired state of the workload distribution. +A scheduler is a component responsible for assigning Pods to nodes in the cluster. It takes into +account factors like resource availability, node capacity, and any defined scheduling constraints or +policies. The scheduler ensures that Pods are placed on suitable nodes to optimize resource +utilization and maintain high availability. It considers factors such as affinity, anti-affinity, +and resource requirements when making scheduling decisions. The scheduler continuously monitors the +cluster and makes adjustments as needed to maintain the desired state of the workload distribution. ## Service -In Kubernetes, a Service is an abstraction that defines a logical set of Pods that enables clients to interact with a consistent set of Pods, regardless of whether the code is designed for a cloud-native environment or a containerized legacy application. +In Kubernetes, a Service is an abstraction that defines a logical set of Pods that enables clients +to interact with a consistent set of Pods, regardless of whether the code is designed for a +cloud-native environment or a containerized legacy application. ## Spin -Spin is a framework designed for building and running event-driven microservice applications using WebAssembly (Wasm) components. +Spin is a framework designed for building and running event-driven microservice applications using +WebAssembly (Wasm) components. ## `SpinApp` Manifest The goal of the `SpinApp` manifest is twofold: - to represent the possible options for configuring a Wasm workload running in Kubernetes -- to simplify and abstract the internals of _how_ that Wasm workload is executed, while - allowing the user to configure it to their needs +- to simplify and abstract the internals of _how_ that Wasm workload is executed, while allowing the + user to configure it to their needs -As a result, the simplest `SpinApp` manifest only requires the registry reference to create a deployment, pod, and service with the right Wasm executor. +As a result, the simplest `SpinApp` manifest only requires the registry reference to create a +deployment, pod, and service with the right Wasm executor. However, the `SpinApp` manifest currently supports configuring options such as: @@ -156,8 +246,10 @@ However, the `SpinApp` manifest currently supports configuring options such as: ## Spin App Executor (CRD) -The `SpinAppExecutor` CRD is a [Custom Resource Definition](#custom-resource-definition-crd) utilized by Spin Operator to determine which executor type should be used in running a SpinApp. +The `SpinAppExecutor` CRD is a [Custom Resource Definition](#custom-resource-definition-crd) +utilized by Spin Operator to determine which executor type should be used in running a SpinApp. ## Spin Operator -Spin Operator is a Kubernetes operator in charge of handling the lifecycle of Spin applications based on their SpinApp resources. 
+Spin Operator is a Kubernetes operator in charge of handling the lifecycle of Spin applications +based on their SpinApp resources. diff --git a/content/en/docs/install/azure-kubernetes-service.md b/content/en/docs/install/azure-kubernetes-service.md index 75e33d64..24c75f2e 100644 --- a/content/en/docs/install/azure-kubernetes-service.md +++ b/content/en/docs/install/azure-kubernetes-service.md @@ -8,7 +8,8 @@ aliases: - /docs/spin-operator/tutorials/deploy-on-azure-kubernetes-service --- -In this tutorial, you install Spin Operator on an Azure Kubernetes Service (AKS) cluster and deploy a simple Spin application. You will learn how to: +In this tutorial, you install Spin Operator on an Azure Kubernetes Service (AKS) cluster and deploy +a simple Spin application. You will learn how to: - Deploy an AKS cluster - Install Spin Operator Custom Resource Definition and Runtime Class @@ -23,11 +24,15 @@ Please ensure you have the following tools installed before continuing: - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI - [Helm](https://helm.sh) - the package manager for Kubernetes -- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) - cross-platform CLI for managing Azure resources +- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) - cross-platform CLI + for managing Azure resources ## Provisioning the necessary Azure Infrastructure -Before you dive into deploying Spin Operator on Azure Kubernetes Service (AKS), the underlying cloud infrastructure must be provisioned. For the sake of this article, you will provision a simple AKS cluster. (Alternatively, you can setup the AKS cluster following [this guide from Microsoft](https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli).) +Before you dive into deploying Spin Operator on Azure Kubernetes Service (AKS), the underlying cloud +infrastructure must be provisioned. For the sake of this article, you will provision a simple AKS +cluster. (Alternatively, you can setup the AKS cluster following [this guide from +Microsoft](https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli).) ```shell # Login with Azure CLI @@ -49,7 +54,8 @@ az aks create --name aks-spin-operator \ --generate-ssh-keys ``` -Once the AKS cluster has been provisioned, use the `aks get-credentials` command to download credentials for `kubectl`: +Once the AKS cluster has been provisioned, use the `aks get-credentials` command to download +credentials for `kubectl`: ```shell # Download credentials for kubectl @@ -72,7 +78,9 @@ kube-system Active 3m ## Deploying the Spin Operator -First, the [Custom Resource Definition (CRD)]({{< ref "glossary#custom-resource-definition-crd" >}}) and the [Runtime Class]({{< ref "glossary#runtime-class" >}}) for `wasmtime-spin-v2` must be installed. +First, the [Custom Resource Definition (CRD)]({{< ref "glossary#custom-resource-definition-crd" >}}) +and the [Runtime Class]({{< ref "glossary#runtime-class" >}}) for `wasmtime-spin-v2` must be +installed. ```shell # Install the CRDs @@ -82,7 +90,9 @@ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0. 
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.runtime-class.yaml ``` -The following installs [cert-manager](https://github.com/cert-manager/cert-manager) which is required to automatically provision and manage TLS certificates (used by the admission webhook system of Spin Operator) +The following installs [cert-manager](https://github.com/cert-manager/cert-manager) which is +required to automatically provision and manage TLS certificates (used by the admission webhook +system of Spin Operator) ```shell # Install cert-manager CRDs @@ -99,7 +109,8 @@ helm install cert-manager jetstack/cert-manager \ --version v1.14.3 ``` -The Spin Operator chart also has a dependency on [Kwasm](https://kwasm.sh/), which you use to install `containerd-wasm-shim` on the Kubernetes node(s): +The Spin Operator chart also has a dependency on [Kwasm](https://kwasm.sh/), which you use to +install `containerd-wasm-shim` on the Kubernetes node(s): @@ -130,7 +141,8 @@ kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator {"level":"info","time":"2024-02-12T11:24:00Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is Completed. Happy WASMing"} ``` -The following installs the chart with the release name `spin-operator` in the `spin-operator` namespace: +The following installs the chart with the release name `spin-operator` in the `spin-operator` +namespace: ```shell helm install spin-operator \ @@ -149,7 +161,9 @@ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0. ## Deploying a Spin App to AKS -To validate the Spin Operator deployment, you will deploy a simple Spin App to the AKS cluster. The following command will install a simple Spin App using the `SpinApp` CRD you provisioned in the previous section: +To validate the Spin Operator deployment, you will deploy a simple Spin App to the AKS cluster. The +following command will install a simple Spin App using the `SpinApp` CRD you provisioned in the +previous section: ```shell # Deploy a sample Spin app @@ -158,7 +172,8 @@ kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/c ## Verifying the Spin App -Configure port forwarding from port `8080` of your local machine to port `80` of the Kubernetes service which points to the Spin App you installed in the previous section: +Configure port forwarding from port `8080` of your local machine to port `80` of the Kubernetes +service which points to the Spin App you installed in the previous section: ```shell kubectl port-forward services/simple-spinapp 8080:80 @@ -166,7 +181,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -Send a HTTP request to [http://127.0.0.1:8080/hello](http://127.0.0.1:8080/hello) using [`curl`](https://curl.se/): +Send a HTTP request to [http://127.0.0.1:8080/hello](http://127.0.0.1:8080/hello) using +[`curl`](https://curl.se/): ```shell # Send an HTTP GET request to the Spin App diff --git a/content/en/docs/install/installing-with-helm.md b/content/en/docs/install/installing-with-helm.md index 37775f40..7e6c1b6f 100644 --- a/content/en/docs/install/installing-with-helm.md +++ b/content/en/docs/install/installing-with-helm.md @@ -16,20 +16,25 @@ For this guide in particular, you will need: ## Install Spin Operator With Helm -The following instructions are for installing Spin Operator using a Helm chart (using `helm install`). +The following instructions are for installing Spin Operator using a Helm chart (using `helm +install`). 
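+
+Before preparing the cluster, it can help to confirm that Helm is available and that `kubectl` is
+pointed at the cluster you intend to install into, for example:
+
+```shell
+helm version
+kubectl cluster-info
+```
+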
### Prepare the Cluster Before installing the chart, you'll need to ensure the following are installed: -- [cert-manager](https://github.com/cert-manager/cert-manager) to automatically provision and manage TLS certificates (used by spin-operator's admission webhook system). For detailed installation instructions see [the cert-manager documentation](https://cert-manager.io/docs/installation/). +- [cert-manager](https://github.com/cert-manager/cert-manager) to automatically provision and manage + TLS certificates (used by spin-operator's admission webhook system). For detailed installation + instructions see [the cert-manager documentation](https://cert-manager.io/docs/installation/). ```shell kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.crds.yaml kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml ``` -- [Kwasm Operator](https://github.com/kwasm/kwasm-operator) is required to install WebAssembly shims on Kubernetes nodes that don't already include them. Note that in the future this will be replaced by [runtime class manager]({{< ref "architecture#runtime-class-manager" >}}). +- [Kwasm Operator](https://github.com/kwasm/kwasm-operator) is required to install WebAssembly shims + on Kubernetes nodes that don't already include them. Note that in the future this will be replaced + by [runtime class manager]({{< ref "architecture#runtime-class-manager" >}}). ```shell # Add Helm repository if not already done @@ -48,28 +53,29 @@ kubectl annotate node --all kwasm.sh/kwasm-node=true ## Chart prerequisites -Now we have our dependencies installed, we can start installing the operator. -This involves a couple of steps that allow for further customization of Spin -Applications in the cluster over time, but here we install the defaults. +Now we have our dependencies installed, we can start installing the operator. This involves a couple +of steps that allow for further customization of Spin Applications in the cluster over time, but +here we install the defaults. -- First ensure the [Custom Resource Definitions (CRD)]({{< ref "glossary#custom-resource-definition-crd" >}}) are installed. This includes the SpinApp CRD representing Spin applications to be scheduled on the cluster. +- First ensure the [Custom Resource Definitions (CRD)]({{< ref + "glossary#custom-resource-definition-crd" >}}) are installed. This includes the SpinApp CRD + representing Spin applications to be scheduled on the cluster. ```shell kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml ``` -- Next we create a [RuntimeClass]({{< ref "glossary#runtime-class" >}}) that -points to the `spin` handler called `wasmtime-spin-v2`. If you -are deploying to a production cluster that only has a shim on a subset of nodes, -you'll need to modify the RuntimeClass with a `nodeSelector:`: +- Next we create a [RuntimeClass]({{< ref "glossary#runtime-class" >}}) that points to the `spin` +handler called `wasmtime-spin-v2`. If you are deploying to a production cluster that only has a shim +on a subset of nodes, you'll need to modify the RuntimeClass with a `nodeSelector:`: ```shell kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.runtime-class.yaml ``` - Finally, we create a `containerd-spin-shim` [SpinAppExecutor]({{< ref - "glossary#spin-app-executor-crd" >}}). 
This tells the Spin Operator to use the - RuntimeClass we just created to run Spin Apps: + "glossary#spin-app-executor-crd" >}}). This tells the Spin Operator to use the RuntimeClass we + just created to run Spin Apps: ```shell kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.shim-executor.yaml @@ -92,7 +98,8 @@ helm install spin-operator \ ### Upgrading the Chart -Note that you may also need to upgrade the spin-operator CRDs in tandem with upgrading the Helm release: +Note that you may also need to upgrade the spin-operator CRDs in tandem with upgrading the Helm +release: ```shell kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml @@ -120,7 +127,8 @@ helm delete spin-operator --namespace spin-operator This will remove all Kubernetes resources associated with the chart and deletes the Helm release. -To completely uninstall all resources related to spin-operator, you may want to delete the corresponding CRD resources and the RuntimeClass: +To completely uninstall all resources related to spin-operator, you may want to delete the +corresponding CRD resources and the RuntimeClass: ```shell kubectl delete -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.shim-executor.yaml diff --git a/content/en/docs/install/linode-kubernetes-engine.md b/content/en/docs/install/linode-kubernetes-engine.md index 7dd2a076..bae8dd16 100644 --- a/content/en/docs/install/linode-kubernetes-engine.md +++ b/content/en/docs/install/linode-kubernetes-engine.md @@ -5,38 +5,53 @@ date: 2024-07-23 tags: [Installation] --- -This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes Engine (LKE). +This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes +Engine (LKE). ## Prerequisites -This guide assumes that you have an Akamai Linode account that is configured and has sufficient permissions for creating a new LKE cluster. +This guide assumes that you have an Akamai Linode account that is configured and has sufficient +permissions for creating a new LKE cluster. You will also need recent versions of `kubectl` and `helm` installed on your system. ## Creating an LKE Cluster -LKE has a managed control plane, so you only need to create the pool of worker nodes. In this tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should be fine for installing SpinKube and running up to around 100 Spin apps. +LKE has a managed control plane, so you only need to create the pool of worker nodes. In this +tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should +be fine for installing SpinKube and running up to around 100 Spin apps. -You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because containers consume substantially more resources than Spin apps do. +You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because +containers consume substantially more resources than Spin apps do. -In the Linode web console, click on `Kubernetes` in the right-hand navigation, and then click `Create Cluster`. +In the Linode web console, click on `Kubernetes` in the right-hand navigation, and then click +`Create Cluster`. ![LKE Creation Screen Described Below](../lke-spinkube-create.png) You will only need to make a few choices on this screen. 
Here's what we have done: -* We named the cluster `spinkube-lke-1`. You should name it according to whatever convention you prefer +* We named the cluster `spinkube-lke-1`. You should name it according to whatever convention you + prefer * We chose the `Chicago, IL (us-ord)` region, but you can choose any region you prefer * The latest supported Kubernetes version is `1.30`, so we chose that -* For this testing cluster, we chose `No` on `HA Control Plane` because we do not need high availability -* In `Add Node Pools`, we added two `Dedicated 4 GB` simply to show a cluster running more than one node. Two nodes is sufficient for Spin apps, though you may prefer the more traditional 3 node cluster. Click `Add` to add these, and ignore the warning about minimum sizes. +* For this testing cluster, we chose `No` on `HA Control Plane` because we do not need high + availability +* In `Add Node Pools`, we added two `Dedicated 4 GB` simply to show a cluster running more than one + node. Two nodes is sufficient for Spin apps, though you may prefer the more traditional 3 node + cluster. Click `Add` to add these, and ignore the warning about minimum sizes. Once you have set things to your liking, press `Create Cluster`. -This will take you to a screen that shows the status of the cluster. Initially, you will want to wait for all of your `Node Pool` to start up. Once all of the nodes are online, download the `kubeconfig` file, which will be named something like `spinkube-lke-1-kubeconfig.yaml`. +This will take you to a screen that shows the status of the cluster. Initially, you will want to +wait for all of your `Node Pool` to start up. Once all of the nodes are online, download the +`kubeconfig` file, which will be named something like `spinkube-lke-1-kubeconfig.yaml`. -> The `kubeconfig` file will have the credentials for connecting to your new LKE cluster. Do not share that file or put it in a public place. +> The `kubeconfig` file will have the credentials for connecting to your new LKE cluster. Do not +> share that file or put it in a public place. -For all of the subsequent operations, you will want to use the `spinkube-lke-1-kubeconfig.yaml` as your main Kubernetes configuration file. The best way to do that is to set the environment variable `KUBECONFIG` to point to that file: +For all of the subsequent operations, you will want to use the `spinkube-lke-1-kubeconfig.yaml` as +your main Kubernetes configuration file. The best way to do that is to set the environment variable +`KUBECONFIG` to point to that file: ```console $ export KUBECONFIG=/path/to/spinkube-lke-1-kubeconfig.yaml @@ -67,25 +82,34 @@ users: token: REDACTED ``` -This shows us our cluster config. You should be able to cross-reference the `lkeNNNNNN` version with what you see on your Akamai Linode dashboard. +This shows us our cluster config. You should be able to cross-reference the `lkeNNNNNN` version with +what you see on your Akamai Linode dashboard. ## Install SpinKube Using Helm -At this point, [install SpinKube with Helm](installing-with-helm). As long as your `KUBECONFIG` environment variable is pointed at the correct cluster, the installation method documented there will work. +At this point, [install SpinKube with Helm](installing-with-helm). As long as your `KUBECONFIG` +environment variable is pointed at the correct cluster, the installation method documented there +will work. Once you are done following the installation steps, return here to install a first app. 
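+
+Before deploying an app, a quick sanity check can confirm that everything is in place. Assuming
+you installed into the default `spin-operator` namespace described in the Helm guide, the
+following should show the operator pod as `Running` and the `wasmtime-spin-v2` runtime class
+present:
+
+```console
+$ kubectl get pods -n spin-operator
+$ kubectl get runtimeclass wasmtime-spin-v2
+```
+
+If either is missing, revisit the installation steps before continuing.
+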
## Creating a First App -We will use the `spin kube` plugin to scaffold out a new app. If you run the following command and the `kube` plugin is not installed, you will first be prompted to install the plugin. Choose `yes` to install. +We will use the `spin kube` plugin to scaffold out a new app. If you run the following command and +the `kube` plugin is not installed, you will first be prompted to install the plugin. Choose `yes` +to install. -We'll point to an existing Spin app, a [Hello World program written in Rust](https://github.com/fermyon/spin/tree/main/examples/http-rust), compiled to Wasm, and stored in GitHub Container Registry (GHCR): +We'll point to an existing Spin app, a [Hello World program written in +Rust](https://github.com/fermyon/spin/tree/main/examples/http-rust), compiled to Wasm, and stored in +GitHub Container Registry (GHCR): ```console $ spin kube scaffold --from ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0 > hello-world.yaml ``` -> Note that Spin apps, which are WebAssembly, can be [stored in most container registries](https://developer.fermyon.com/spin/v2/registry-tutorial) even though they are not Docker containers. +> Note that Spin apps, which are WebAssembly, can be [stored in most container +> registries](https://developer.fermyon.com/spin/v2/registry-tutorial) even though they are not +> Docker containers. This will write the following to `hello-world.yaml`: @@ -107,10 +131,11 @@ $ kubectl apply -f hello-world.yaml spinapp.core.spinoperator.dev/spin-rust-hello created ``` -With SpinKube, SpinApps will be deployed as `Pod` resources, so we can see the app using `kubectl get pods`: +With SpinKube, SpinApps will be deployed as `Pod` resources, so we can see the app using `kubectl +get pods`: ```console -$ kubectl get pods +$ kubectl get pods NAME READY STATUS RESTARTS AGE spin-rust-hello-f6d8fc894-7pq7k 1/1 Running 0 54s spin-rust-hello-f6d8fc894-vmsgh 1/1 Running 0 54s @@ -120,7 +145,9 @@ Status is listed as `Running`, which means our app is ready. ## Making An App Public with a NodeBalancer -By default, Spin apps will be deployed with an internal service. But with Linode, you can provision a [NodeBalancer](https://www.linode.com/docs/products/networking/nodebalancers/) using a `Service` object. Here is a `hello-world-service.yaml` that provisions a `nodebalancer` for us: +By default, Spin apps will be deployed with an internal service. But with Linode, you can provision +a [NodeBalancer](https://www.linode.com/docs/products/networking/nodebalancers/) using a `Service` +object. Here is a `hello-world-service.yaml` that provisions a `nodebalancer` for us: ```yaml apiVersion: v1 @@ -143,9 +170,11 @@ spec: sessionAffinity: None ``` -When LKE receives a `Service` whose `type` is `LoadBalancer`, it will provision a NodeBalancer for you. +When LKE receives a `Service` whose `type` is `LoadBalancer`, it will provision a NodeBalancer for +you. -> You can customize this for your app simply by replacing all instances of `spin-rust-hello` with the name of your app. +> You can customize this for your app simply by replacing all instances of `spin-rust-hello` with +> the name of your app. 
We can create the NodeBalancer by running `kubectl apply` on the above file: @@ -154,7 +183,8 @@ $ kubectl apply -f hello-world-nodebalancer.yaml service/spin-rust-hello-nodebalancer created ``` -Provisioning the new NodeBalancer may take a few moments, but we can get the IP address using `kubectl get service spin-rust-hello-nodebalancer`: +Provisioning the new NodeBalancer may take a few moments, but we can get the IP address using +`kubectl get service spin-rust-hello-nodebalancer`: ```console $ get service spin-rust-hello-nodebalancer @@ -162,10 +192,12 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP spin-rust-hello-nodebalancer LoadBalancer 10.128.235.253 172.234.210.123 80:31083/TCP 40s ``` -The `EXTERNAL-IP` field tells us what the NodeBalancer is using as a public IP. We can now test this out over the Internet using `curl` or by entering the URL `http://172.234.210.123/hello` into your browser. +The `EXTERNAL-IP` field tells us what the NodeBalancer is using as a public IP. We can now test this +out over the Internet using `curl` or by entering the URL `http://172.234.210.123/hello` into your +browser. ```console -$ curl 172.234.210.123/hello +$ curl 172.234.210.123/hello Hello world from Spin! ``` @@ -176,10 +208,14 @@ To delete this sample app, we will first delete the NodeBalancer, and then delet ```console $ kubectl delete service spin-rust-hello-nodebalancer service "spin-rust-hello-nodebalancer" deleted -$ kubectl delete spinapp spin-rust-hello +$ kubectl delete spinapp spin-rust-hello spinapp.core.spinoperator.dev "spin-rust-hello" deleted ``` -> If you delete the NodeBalancer out of the Linode console, it will not automatically delete the `Service` record in Kubernetes, which will cause inconsistencies. So it is best to use `kubectl delete service` to delete your NodeBalancer. +> If you delete the NodeBalancer out of the Linode console, it will not automatically delete the +> `Service` record in Kubernetes, which will cause inconsistencies. So it is best to use `kubectl +> delete service` to delete your NodeBalancer. -If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai Linode dashboard, navigate to `Kubernetes`, and press the `Delete` button. This will destroy all of your worker nodes and deprovision the control plane. +If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai +Linode dashboard, navigate to `Kubernetes`, and press the `Delete` button. This will destroy all of +your worker nodes and deprovision the control plane. diff --git a/content/en/docs/install/microk8s.md b/content/en/docs/install/microk8s.md index 1a5fb279..762bb5cf 100644 --- a/content/en/docs/install/microk8s.md +++ b/content/en/docs/install/microk8s.md @@ -9,12 +9,16 @@ This guide walks through the process of installing and configuring Microk8s and ## Prerequisites -This guide assumes you are running Ubuntu 24.04, and that you have Snap enabled (which is the default). +This guide assumes you are running Ubuntu 24.04, and that you have Snap enabled (which is the +default). -> The testing platform for this installation was an Akamai Edge Linode running 4G of memory and 2 cores. +> The testing platform for this installation was an Akamai Edge Linode running 4G of memory and 2 +> cores. ## Installing Spin -You will need to [install Spin](https://developer.fermyon.com/spin/quickstart). 
The easiest way is to just use the following one-liner to get the latest version of Spin: + +You will need to [install Spin](https://developer.fermyon.com/spin/quickstart). The easiest way is +to just use the following one-liner to get the latest version of Spin: ```console { data-plausible="copy-quick-deploy-sample" } $ curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash @@ -26,11 +30,14 @@ Typically you will then want to move `spin` to `/usr/local/bin` or somewhere els $ sudo mv spin /usr/local/bin/spin ``` -You can test that it's on your `$PATH` with `which spin`. If this returns blank, you will need to adjust your `$PATH` variable or put Spin somewhere that is already on `$PATH`. +You can test that it's on your `$PATH` with `which spin`. If this returns blank, you will need to +adjust your `$PATH` variable or put Spin somewhere that is already on `$PATH`. ## A Script To Do This -If you would rather work with a shell script, you may find [this Gist](https://gist.github.com/kate-goldenring/47950ccb30be2fa0180e276e82ac3593#file-spinkube-on-microk8s-sh) a great place to start. It installs Microk8s and SpinKube, and configures both. +If you would rather work with a shell script, you may find [this +Gist](https://gist.github.com/kate-goldenring/47950ccb30be2fa0180e276e82ac3593#file-spinkube-on-microk8s-sh) +a great place to start. It installs Microk8s and SpinKube, and configures both. ## Installing Microk8s on Ubuntu @@ -40,7 +47,9 @@ Use `snap` to install microk8s: $ sudo snap install microk8s --classic ``` -This will install Microk8s and start it. You may want to read the [official installation instructions](https://microk8s.io/docs/getting-started) before proceeding. Wait for a moment or two, and then ensure Microk8s is running with the `microk8s status` command. +This will install Microk8s and start it. You may want to read the [official installation +instructions](https://microk8s.io/docs/getting-started) before proceeding. Wait for a moment or two, +and then ensure Microk8s is running with the `microk8s status` command. Next, enable the TLS certificate manager: @@ -52,7 +61,9 @@ Now we’re ready to install the SpinKube environment for running Spin applicati ### Installing SpinKube -SpinKube provides the entire toolkit for running Spin serverless apps. You may want to familiarize yourself with the [SpinKube quickstart](https://www.spinkube.dev/docs/install/quickstart/) guide before proceeding. +SpinKube provides the entire toolkit for running Spin serverless apps. You may want to familiarize +yourself with the [SpinKube quickstart](https://www.spinkube.dev/docs/install/quickstart/) guide +before proceeding. First, we need to apply a runtime class and a CRD for SpinKube: @@ -72,7 +83,8 @@ $ microk8s kubectl annotate node --all kwasm.sh/kwasm-node=true ``` -> The last line above tells Microk8s that all nodes on the cluster (which is just one node in this case) can run Spin applications. +> The last line above tells Microk8s that all nodes on the cluster (which is just one node in this +> case) can run Spin applications. Next, we need to install SpinKube’s operator using Helm (which is included with Microk8s). @@ -81,7 +93,8 @@ $ microk8s helm install spin-operator --namespace spin-operator --create-namespa ``` -Now we have the main operator installed. There is just one more step. We need to install the shim executor, which is a special CRD that allows us to use multiple executors for WebAssembly. +Now we have the main operator installed. There is just one more step. 
We need to install the shim
+executor, which is a special CRD that allows us to use multiple executors for WebAssembly.

```console { data-plausible="copy-quick-deploy-sample" }
$ microk8s kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.shim-executor.yaml
@@ -94,7 +107,8 @@ Now SpinKube is installed!

Next, we can run a simple Spin application inside of Microk8s.

-While we could write regular deployments or pod specifications, the easiest way to deploy a Spin app is by creating a simple `SpinApp` resource. Let's use the simple example from SpinKube:
+While we could write regular deployments or pod specifications, the easiest way to deploy a Spin app
+is by creating a simple `SpinApp` resource. Let's use the simple example from SpinKube:

```console { data-plausible="copy-quick-deploy-sample" }
$ microk8s kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
@@ -112,9 +126,11 @@ spec:
  executor: containerd-shim-spin
```

-You can read up on the definition [in the documentation](https://www.spinkube.dev/docs/reference/spin-app/).
+You can read up on the definition [in the
+documentation](https://www.spinkube.dev/docs/reference/spin-app/).

-It may take a moment or two to get started, but you should be able to see the app with `microk8s kubectl get pods`.
+It may take a moment or two to get started, but you should be able to see the app with `microk8s
+kubectl get pods`.

```console { data-plausible="copy-quick-deploy-sample" }
$ microk8s kubectl get po
@@ -124,7 +140,9 @@ simple-spinapp-5c7b66f576-9v9fd   1/1     Running   0          45m

### Troubleshooting

-If `STATUS` gets stuck in `ContainerCreating`, it is possible that KWasm did not install correctly. Try doing a `microk8s stop`, waiting a few minutes, and then running `microk8s start`. You can also try the command:
+If `STATUS` gets stuck in `ContainerCreating`, it is possible that KWasm did not install correctly.
+Try doing a `microk8s stop`, waiting a few minutes, and then running `microk8s start`. You can also
+try the command:

```console { data-plausible="copy-quick-deploy-sample" }
$ microk8s kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator
@@ -147,11 +165,15 @@ Hello world from Spin!

### Where to go from here

-So far, we installed Microk8s, SpinKube, and a single Spin app. To have a more production-ready version, you might want to:
+So far, we installed Microk8s, SpinKube, and a single Spin app. To have a more production-ready
+version, you might want to:

-- Generate TLS certificates and attach them to your Spin app to add HTTPS support. If you are using an ingress controller (see below), [here is the documentation for TLS config](https://kubernetes.github.io/ingress-nginx/user-guide/tls/).
+- Generate TLS certificates and attach them to your Spin app to add HTTPS support. If you are using
+  an ingress controller (see below), [here is the documentation for TLS
+  config](https://kubernetes.github.io/ingress-nginx/user-guide/tls/). A sketch of such an ingress
+  follows this list.
- Configure a [cluster ingress](https://microk8s.io/docs/addon-ingress)
-- Set up another Linode Edge instsance and create a [two-node Microk8s cluster](https://microk8s.io/docs/clustering).
+- Set up another Linode Edge instance and create a [two-node Microk8s
+  cluster](https://microk8s.io/docs/clustering).
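+
+As a rough sketch of the first item above, a TLS-enabled ingress for the `simple-spinapp` service
+might look like the following. This is illustrative only: the hostname `spinapp.example.com` and
+the secret name `spinapp-tls` are placeholders, and the TLS secret must exist beforehand (created,
+for example, with cert-manager or `microk8s kubectl create secret tls`).
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: simple-spinapp-tls
+spec:
+  tls:
+    - hosts:
+        - spinapp.example.com   # placeholder hostname
+      secretName: spinapp-tls   # placeholder, pre-created TLS secret
+  rules:
+    - host: spinapp.example.com
+      http:
+        paths:
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: simple-spinapp
+                port:
+                  number: 80
+```
+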
### Bonus: Configuring Microk8s ingress

@@ -159,7 +181,9 @@ Microk8s includes an NGINX-based ingress controller that works great with Spin a

Enable the ingress controller: `microk8s enable ingress`

-Now we can create an ingress that routes our traffic to the `simple-spinapp` app. Create the file `ingress.yaml` with the following content. Note that the [`service.name`](http://service.name) is the name of our Spin app.
+Now we can create an ingress that routes our traffic to the `simple-spinapp` app. Create the file
+`ingress.yaml` with the following content. Note that `service.name` is the name of our Spin app.

```yaml
apiVersion: networking.k8s.io/v1
@@ -179,12 +203,16 @@ spec:
              number: 80
```

-Install the above with `microk8s kubectl -f ingress.yaml`. After a moment or two, you should be able to run `curl [localhost](http://localhost)` and see `Hello World!`.
+Install the above with `microk8s kubectl apply -f ingress.yaml`. After a moment or two, you
+should be able to run `curl localhost` and see `Hello world from Spin!`.

## Conclusion

In this guide we've installed Spin, Microk8s, and SpinKube and then run a Spin application.

-To learn more about the many things you can do with Spin apps, go to [the Spin developer docs](https://developer.fermyon.com/spin). You can also look at a variety of examples at [Spin Up Hub](https://developer.fermyon.com/hub).
+To learn more about the many things you can do with Spin apps, go to [the Spin developer
+docs](https://developer.fermyon.com/spin). You can also look at a variety of examples at [Spin Up
+Hub](https://developer.fermyon.com/hub).

-Or to try out different Kubernetes configurations, check out [other installation guides](https://www.spinkube.dev/docs/install/).
+Or to try out different Kubernetes configurations, check out [other installation
+guides](https://www.spinkube.dev/docs/install/).
diff --git a/content/en/docs/install/quickstart.md b/content/en/docs/install/quickstart.md
index 4ad1c3f3..cac34699 100644
--- a/content/en/docs/install/quickstart.md
+++ b/content/en/docs/install/quickstart.md
@@ -7,20 +7,26 @@ aliases:
  - /docs/spin-operator/quickstart
---

-This Quickstart guide demonstrates how to set up a new Kubernetes cluster, install the SpinKube and deploy your first Spin application.
+This Quickstart guide demonstrates how to set up a new Kubernetes cluster, install SpinKube, and
+deploy your first Spin application.

## Prerequisites

For this Quickstart guide, you will need:

- [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
-- [Rancher Desktop](https://rancherdesktop.io/) or [Docker Desktop](https://docs.docker.com/get-docker/) for managing containers and Kubernetes on your desktop
-- [k3d](https://k3d.io/v5.6.0/?h=installation#installation) - a lightweight Kubernetes distribution that runs on Docker
+- [Rancher Desktop](https://rancherdesktop.io/) or [Docker
+  Desktop](https://docs.docker.com/get-docker/) for managing containers and Kubernetes on your
+  desktop
+- [k3d](https://k3d.io/v5.6.0/?h=installation#installation) - a lightweight Kubernetes distribution
+  that runs on Docker
- [Helm](https://helm.sh/docs/intro/install/) - the package manager for Kubernetes

### Set up Your Kubernetes Cluster

-1. Create a Kubernetes cluster with a k3d image that includes the [containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) prerequisite already installed:
+1. 
Create a Kubernetes cluster with a k3d image that includes the + [containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) prerequisite already + installed: ```console { data-plausible="copy-quick-create-k3d" } k3d cluster create wasm-cluster \ @@ -29,7 +35,9 @@ k3d cluster create wasm-cluster \ --agents 2 ``` -> Note: Spin Operator requires a few Kubernetes resources that are installed globally to the cluster. We create these directly through `kubectl` as a best practice, since their lifetimes are usually managed separately from a given Spin Operator installation. +> Note: Spin Operator requires a few Kubernetes resources that are installed globally to the +> cluster. We create these directly through `kubectl` as a best practice, since their lifetimes are +> usually managed separately from a given Spin Operator installation. 2. Install cert-manager @@ -38,15 +46,20 @@ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/ kubectl wait --for=condition=available --timeout=300s deployment/cert-manager-webhook -n cert-manager ``` -3. Apply the [Runtime Class](https://github.com/spinkube/spin-operator/blob/main/config/samples/spin-runtime-class.yaml) used for scheduling Spin apps onto nodes running the shim: +3. Apply the [Runtime + Class](https://github.com/spinkube/spin-operator/blob/main/config/samples/spin-runtime-class.yaml) + used for scheduling Spin apps onto nodes running the shim: -> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` that matches nodes that have the shim installed. However, in the K3d example, they're installed on every node. +> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` +> that matches nodes that have the shim installed. However, in the K3d example, they're installed on +> every node. ```console { data-plausible="copy-quick-apply-runtime-class" } kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.runtime-class.yaml ``` -4. Apply the [Custom Resource Definitions]({{< ref "glossary#custom-resource-definition-crd" >}}) used by the Spin Operator: +4. Apply the [Custom Resource Definitions]({{< ref "glossary#custom-resource-definition-crd" >}}) + used by the Spin Operator: ```console { data-plausible="copy-quick-apply-crd" } kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml @@ -54,7 +67,10 @@ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0. ## Deploy the Spin Operator -Execute the following command to install the Spin Operator on the K3d cluster using Helm. This will create all of the Kubernetes resources required by Spin Operator under the Kubernetes namespace `spin-operator`. It may take a moment for the installation to complete as dependencies are installed and pods are spinning up. +Execute the following command to install the Spin Operator on the K3d cluster using Helm. This will +create all of the Kubernetes resources required by Spin Operator under the Kubernetes namespace +`spin-operator`. It may take a moment for the installation to complete as dependencies are installed +and pods are spinning up. ```console { data-plausible="copy-quick-deploy-operator" } # Install Spin Operator with Helm @@ -105,4 +121,5 @@ Hello world from Spin! Congrats on deploying your first SpinApp! 
Recommended next steps: - Scale your [Spin Apps with Horizontal Pod Autoscaler (HPA)]({{< ref "scaling-with-hpa" >}}) -- Scale your [Spin Apps with Kubernetes Event Driven Autoscaler (KEDA)]({{< ref "scaling-with-keda" >}}) +- Scale your [Spin Apps with Kubernetes Event Driven Autoscaler (KEDA)]({{< ref "scaling-with-keda" + >}}) diff --git a/content/en/docs/install/rancher-desktop.md b/content/en/docs/install/rancher-desktop.md index 9de07ca6..2c72d2ff 100644 --- a/content/en/docs/install/rancher-desktop.md +++ b/content/en/docs/install/rancher-desktop.md @@ -8,7 +8,8 @@ aliases: - /docs/spin-operator/tutorials/integrating-with-rancher-desktop --- -[Rancher Desktop](https://rancherdesktop.io/) is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop. +[Rancher Desktop](https://rancherdesktop.io/) is an open-source application that provides all the +essentials to work with containers and Kubernetes on your desktop. ### Prerequisites @@ -18,41 +19,50 @@ aliases: ### Step 1: Installing Rancher Desktop 1. **Download Rancher Desktop**: - - Navigate to the [Rancher Desktop releases page](https://github.com/rancher-sandbox/rancher-desktop/releases/tag/v1.14.0). + - Navigate to the [Rancher Desktop releases + page](https://github.com/rancher-sandbox/rancher-desktop/releases/tag/v1.14.0). - Select the appropriate installer for your operating system for version 1.14.0. 2. **Install Rancher Desktop**: - - Run the downloaded installer and follow the on-screen instructions to complete the installation. + - Run the downloaded installer and follow the on-screen instructions to complete the + installation. ### Step 2: Configure Rancher Desktop - Open Rancher Desktop. - Navigate to the **Preferences** -> **Kubernetes** menu. - - Ensure that the **Enable** **Kubernetes** is selected and that the **Enable Traefik** and **Install Spin Operator** Options are checked. Make sure to **Apply** your changes. + - Ensure that the **Enable** **Kubernetes** is selected and that the **Enable Traefik** and + **Install Spin Operator** Options are checked. Make sure to **Apply** your changes. ![Rancher Desktop](../rancher-desktop-kubernetes.png) - - Make sure to select `rancher-desktop` from the `Kubernetes Contexts` configuration in your toolbar. + - Make sure to select `rancher-desktop` from the `Kubernetes Contexts` configuration in your + toolbar. ![Kubernetes contexts](../rancher-desktop-contexts.png) - - Make sure that the Enable Wasm option is checked in the **Preferences** → **Container Engine section**. Remember to always apply your changes. + - Make sure that the Enable Wasm option is checked in the **Preferences** → **Container Engine + section**. Remember to always apply your changes. ![Rancher preferences](../rancher-desktop-preferences.png) - - Once your changes have been applied, go to the **Cluster Dashboard** → **More Resources** → **Cert Manager** section and click on **Certificates**. You will see the `spin-operator-serving-cert` is ready. + - Once your changes have been applied, go to the **Cluster Dashboard** → **More Resources** → + **Cert Manager** section and click on **Certificates**. You will see the + `spin-operator-serving-cert` is ready. ![Certificates tab](../rancher-desktop-certificates.png) ### Step 3: Creating a Spin Application 1. **Open a terminal** (Command Prompt, Terminal, or equivalent based on your OS). -2. 
**Create a new Spin application**: This command creates a new Spin application using the `http-js` template, named `hello-k3s`. +2. **Create a new Spin application**: This command creates a new Spin application using the + `http-js` template, named `hello-k3s`. ```bash $ spin new -t http-js hello-k3s --accept-defaults $ cd hello-k3s ``` -3. We can edit the `/src/index.js` file and make the workload return a string "Hello from Rancher Desktop": +3. We can edit the `/src/index.js` file and make the workload return a string "Hello from Rancher + Desktop": ```javascript export async function handleRequest(request) { @@ -99,11 +109,13 @@ This command prepares the necessary Kubernetes deployment configurations. $ spin kube deploy --from ttl.sh/hello-k3s:0.1.0 ``` -If we click on the Rancher Desktop’s “Cluster Dashboard”, we can see hello-k3s:0.1.0 running inside the “Workloads” dropdown section: +If we click on the Rancher Desktop’s “Cluster Dashboard”, we can see hello-k3s:0.1.0 running inside +the “Workloads” dropdown section: ![Rancher Desktop Preferences Wasm](../rancher-desktop-cluster.png) -To access our app outside of the cluster, we can forward the port so that we access the application from our host machine: +To access our app outside of the cluster, we can forward the port so that we access the application +from our host machine: ```bash $ kubectl port-forward svc/hello-k3s 8083:80 @@ -116,6 +128,7 @@ $ curl localhost:8083 Hello from Rancher Desktop ``` -The above `curl` command or a quick visit to your browser at localhost:8083 will return the "Hello from Rancher Desktop" message: +The above `curl` command or a quick visit to your browser at localhost:8083 will return the "Hello +from Rancher Desktop" message: ![Hello from Rancher Desktop](../rancher-desktop-hello.png) diff --git a/content/en/docs/install/spin-kube-plugin.md b/content/en/docs/install/spin-kube-plugin.md index a0ed0ed6..a1f0ab62 100644 --- a/content/en/docs/install/spin-kube-plugin.md +++ b/content/en/docs/install/spin-kube-plugin.md @@ -7,15 +7,18 @@ aliases: - /docs/spin-plugin-kube/installation --- -The `kube` plugin for `spin` (The Spin CLI) provides first class experience for working with Spin apps in the context of Kubernetes. +The `kube` plugin for `spin` (The Spin CLI) provides first class experience for working with Spin +apps in the context of Kubernetes. ## Prerequisites -Ensure you have the Spin CLI ([version 2.3.1 or newer](https://developer.fermyon.com/spin/v2/upgrade)) installed on your machine. +Ensure you have the Spin CLI ([version 2.3.1 or +newer](https://developer.fermyon.com/spin/v2/upgrade)) installed on your machine. 
## Install the plugin -Before you install the plugin, you should fetch the list of latest Spin plugins from the spin-plugins repository: +Before you install the plugin, you should fetch the list of latest Spin plugins from the +spin-plugins repository: ```sh # Update the list of latest Spin plugins diff --git a/content/en/docs/misc/compatibility.md b/content/en/docs/misc/compatibility.md index 6b8861f7..d6ce15eb 100644 --- a/content/en/docs/misc/compatibility.md +++ b/content/en/docs/misc/compatibility.md @@ -7,7 +7,8 @@ aliases: - /docs/compatibility --- -See the following list of compatible Kubernetes distributions and platforms for running the [Spin Operator](https://github.com/spinkube/spin-operator/): +See the following list of compatible Kubernetes distributions and platforms for running the [Spin +Operator](https://github.com/spinkube/spin-operator/): - [Amazon Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/) - [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/products/kubernetes-service) @@ -15,24 +16,38 @@ See the following list of compatible Kubernetes distributions and platforms for - [Digital Ocean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) - [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) - [k3d](https://k3d.io) - - [minikube](https://minikube.sigs.k8s.io/docs/) (explicitly pass `--container-runtime=containerd` and ensure you're on minikube version `>= 1.33`) + - [minikube](https://minikube.sigs.k8s.io/docs/) (explicitly pass `--container-runtime=containerd` + and ensure you're on minikube version `>= 1.33`) - [Scaleway Kubernetes Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) -> **Disclaimer**: Please note that this is a working list of compatible Kubernetes distributions and platforms. For managed Kubernetes services, it's important to be aware that cloud providers may choose to discontinue support for specific dependencies, such as container runtimes. While we strive to maintain the accuracy of this documentation, it is ultimately your responsibility to verify with your Kubernetes provider whether the required dependencies are still supported. +> **Disclaimer**: Please note that this is a working list of compatible Kubernetes distributions and +> platforms. For managed Kubernetes services, it's important to be aware that cloud providers may +> choose to discontinue support for specific dependencies, such as container runtimes. While we +> strive to maintain the accuracy of this documentation, it is ultimately your responsibility to +> verify with your Kubernetes provider whether the required dependencies are still supported. ### How to validate Spin Operator Compatibility -If you would like to validate Spin Operator's compatibility with a new specific Kubernetes distribution or platform or simply test one of the platforms listed above yourself, follow these steps for validation: +If you would like to validate Spin Operator's compatibility with a new specific Kubernetes +distribution or platform or simply test one of the platforms listed above yourself, follow these +steps for validation: -1. **Install the Spin Operator**: Begin by installing the Spin Operator within the Kubernetes cluster. This involves deploying the necessary dependencies and the Spin Operator itself. (See [Installing with Helm]({{< ref "installing-with-helm" >}})) +1. **Install the Spin Operator**: Begin by installing the Spin Operator within the Kubernetes + cluster. 
This involves deploying the necessary dependencies and the Spin Operator itself. (See + [Installing with Helm]({{< ref "installing-with-helm" >}})) -2. **Create, Package, and Deploy a Spin App**: Proceed by creating a Spin App, packaging it, and successfully deploying it within the Kubernetes environment. (See [Package and Deploy Spin Apps]({{< ref "packaging" >}})) +2. **Create, Package, and Deploy a Spin App**: Proceed by creating a Spin App, packaging it, and + successfully deploying it within the Kubernetes environment. (See [Package and Deploy Spin + Apps]({{< ref "packaging" >}})) -3. **Invoke the Spin App**: Once the Spin App is deployed, ensure at least one request was successfully served by the Spin App. +3. **Invoke the Spin App**: Once the Spin App is deployed, ensure at least one request was + successfully served by the Spin App. ## Container Runtime Constraints -The Spin Operator requires the target nodes that would run Spin applications to support containerd version [`1.6.26+`](https://github.com/containerd/containerd/releases/tag/v1.6.26) or [`1.7.7+`](https://github.com/containerd/containerd/releases/tag/v1.7.7). +The Spin Operator requires the target nodes that would run Spin applications to support containerd +version [`1.6.26+`](https://github.com/containerd/containerd/releases/tag/v1.6.26) or +[`1.7.7+`](https://github.com/containerd/containerd/releases/tag/v1.7.7). Use the `kubectl get nodes -o wide` command to see which container runtime is installed per node: diff --git a/content/en/docs/misc/integrations.md b/content/en/docs/misc/integrations.md index 072514dc..0a8b6090 100644 --- a/content/en/docs/misc/integrations.md +++ b/content/en/docs/misc/integrations.md @@ -11,8 +11,25 @@ aliases: ## KEDA -[Kubernetes Event-Driven Autoscaling (KEDA)](https://keda.sh/) provides event-driven autoscaling for Kubernetes workloads. It allows Kubernetes to automatically scale applications in response to external events such as messages in a queue, enabling more efficient resource utilization and responsive scaling based on actual demand, rather than static metrics. KEDA serves as a bridge between Kubernetes and various event sources, making it easier to scale applications dynamically in a cloud-native environment. If you would like to see how SpinKube integrates with KEDA, please read the ["Scaling With KEDA" tutorial]({{< ref "scaling-with-keda" >}}) which deploys a SpinApp and the KEDA ScaledObject instance onto a cluster. The tutorial also uses Bombardier to generate traffic to test how well KEDA scales our SpinApp. +[Kubernetes Event-Driven Autoscaling (KEDA)](https://keda.sh/) provides event-driven autoscaling for +Kubernetes workloads. It allows Kubernetes to automatically scale applications in response to +external events such as messages in a queue, enabling more efficient resource utilization and +responsive scaling based on actual demand, rather than static metrics. KEDA serves as a bridge +between Kubernetes and various event sources, making it easier to scale applications dynamically in +a cloud-native environment. If you would like to see how SpinKube integrates with KEDA, please read +the ["Scaling With KEDA" tutorial]({{< ref "scaling-with-keda" >}}) which deploys a SpinApp and the +KEDA ScaledObject instance onto a cluster. The tutorial also uses Bombardier to generate traffic to +test how well KEDA scales our SpinApp. 
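+
+As a rough sketch of that integration, a KEDA `ScaledObject` points at the Deployment that the
+Spin Operator creates for a SpinApp. The names and the CPU threshold below are illustrative, and
+the SpinApp itself must set `enableAutoscaling: true` so that replica management is left to KEDA:
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: spinapp-autoscaler
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: keda-spinapp  # illustrative: the Deployment created for your SpinApp
+  minReplicaCount: 1
+  maxReplicaCount: 10
+  triggers:
+    - type: cpu  # requires CPU resource requests on the pod; any KEDA scaler can be used
+      metricType: Utilization
+      metadata:
+        value: "50"
+```
+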
## Rancher Desktop

-The [release of Rancher Desktop 1.13.0](https://www.suse.com/c/rancher_blog/rancher-desktop-1-13-with-support-for-webassembly-and-more/) comes with basic support for running WebAssembly (Wasm) containers and deploying them to Kubernetes. Rancher Desktop by SUSE, is an open-source application that provides all the essentials to work with containers and Kubernetes on your desktop. If you would like to see how SpinKube integrates with Rancher Desktop, please read the ["Integrating With Rancher Desktop" tutorial]({{< ref "/docs/install/rancher-desktop" >}}) which walks through the steps of installing the necessary components for SpinKube (including the CertManager for SSL, CRDs and the KWasm runtime class manager using Helm charts). The tutorial then demonstrates how to create a simple Spin JavaScript application and deploys the application within Rancher Desktop's local cluster.
+The [release of Rancher Desktop
+1.13.0](https://www.suse.com/c/rancher_blog/rancher-desktop-1-13-with-support-for-webassembly-and-more/)
+comes with basic support for running WebAssembly (Wasm) containers and deploying them to Kubernetes.
+Rancher Desktop by SUSE is an open-source application that provides all the essentials to work with
+containers and Kubernetes on your desktop. If you would like to see how SpinKube integrates with
+Rancher Desktop, please read the ["Integrating With Rancher Desktop" tutorial]({{< ref
+"/docs/install/rancher-desktop" >}}) which walks through the steps of installing the necessary
+components for SpinKube (including the CertManager for SSL, CRDs and the KWasm runtime class manager
+using Helm charts). The tutorial then demonstrates how to create a simple Spin JavaScript
+application and deploy it within Rancher Desktop's local cluster.
diff --git a/content/en/docs/overview.md b/content/en/docs/overview.md
index 767d69a0..a2bf761b 100644
--- a/content/en/docs/overview.md
+++ b/content/en/docs/overview.md
@@ -8,18 +8,28 @@ tags: []

# Project Overview

-[SpinKube](https://github.com/spinkube) is a new open source project that streamlines the experience of developing, deploying, and operating Wasm workloads on Kubernetes, using [Spin](https://github.com/fermyon/spin) in tandem with the [runwasi](https://github.com/containerd/runwasi) and [runtime class manager](https://github.com/spinkube/runtime-class-manager) (formerly [KWasm](https://kwasm.sh/)) open source projects.
+[SpinKube](https://github.com/spinkube) is a new open source project that streamlines the experience
+of developing, deploying, and operating Wasm workloads on Kubernetes, using
+[Spin](https://github.com/fermyon/spin) in tandem with the
+[runwasi](https://github.com/containerd/runwasi) and [runtime class
+manager](https://github.com/spinkube/runtime-class-manager) (formerly [KWasm](https://kwasm.sh/))
+open source projects.

With SpinKube, you can leverage the advantages of using WebAssembly (Wasm) for your workloads:

- Artifacts are significantly smaller in size compared to container images.
-- Artifacts can be quickly fetched over the network and started much faster (\*Note: We are aware of several optimizations that still need to be implemented to enhance the startup time for workloads).
+- Artifacts can be quickly fetched over the network and started much faster (\*Note: We are aware of
+  several optimizations that still need to be implemented to enhance the startup time for
+  workloads).
- Substantially fewer resources are required during idle times.
-Thanks to Spin Operator, we can do all of this while integrating with Kubernetes primitives (DNS, probes, autoscaling, metrics, and many more cloud native and CNCF projects). +Thanks to Spin Operator, we can do all of this while integrating with Kubernetes primitives (DNS, +probes, autoscaling, metrics, and many more cloud native and CNCF projects). ![SpinKube Project Overview Diagram](../spinkube-overview-diagram.png) -Spin Operator watches [Spin App Custom Resources]({{< ref "glossary#spinapp-manifest" >}}) and realizes the desired state in the Kubernetes cluster. The foundation of this project was built using the kubebuilder framework and contains a Spin App Custom Resource Definition (CRD) and controller. +Spin Operator watches [Spin App Custom Resources]({{< ref "glossary#spinapp-manifest" >}}) and +realizes the desired state in the Kubernetes cluster. The foundation of this project was built using +the kubebuilder framework and contains a Spin App Custom Resource Definition (CRD) and controller. To get started, check out our [Quickstart guide]({{< ref "quickstart" >}}). diff --git a/content/en/docs/reference/spin-app-executor.md b/content/en/docs/reference/spin-app-executor.md index 05eeb91b..4501d11c 100644 --- a/content/en/docs/reference/spin-app-executor.md +++ b/content/en/docs/reference/spin-app-executor.md @@ -98,8 +98,7 @@ createDeployment is true.
[back to parent](#spinappexecutorspec) -DeploymentConfig specifies how the deployment should be configured when -createDeployment is true. +DeploymentConfig specifies how the deployment should be configured when createDeployment is true. diff --git a/content/en/docs/reference/spin-app.md b/content/en/docs/reference/spin-app.md index 690834f3..f6c939bd 100644 --- a/content/en/docs/reference/spin-app.md +++ b/content/en/docs/reference/spin-app.md @@ -81,9 +81,9 @@ SpinAppSpec defines the desired state of SpinApp Executor controls how this app is executed in the cluster. -Defaults to whatever executor is available on the cluster. If multiple -executors are available then the first executor in alphabetical order -will be chosen. If no executors are available then no default will be set.
+Defaults to whatever executor is available on the cluster. If multiple executors are available then +the first executor in alphabetical order will be chosen. If no executors are available then no +default will be set.
@@ -112,9 +112,9 @@ will be chosen. If no executors are available then no default will be set.
@@ -521,8 +521,8 @@ HTTPHealthProbeHeader is an abstraction around a http header key/value pair. [back to parent](#spinappspec) -LocalObjectReference contains enough information to let you locate the -referenced object inside the same namespace. +LocalObjectReference contains enough information to let you locate the referenced object inside the +same namespace.
true
boolean EnableAutoscaling indicates whether the app is allowed to autoscale. If -true then the operator leaves the replica count of the underlying -deployment to be managed by an external autoscaler (HPA/KEDA). Replicas -cannot be defined if this is enabled. By default EnableAutoscaling is false.
+true then the operator leaves the replica count of the underlying deployment to be managed by an +external autoscaler (HPA/KEDA). Replicas cannot be defined if this is enabled. By default +EnableAutoscaling is false.

Default: false
@@ -1351,8 +1351,9 @@ TODO: Add other useful fields. apiVersion, kind, uid?
[back to parent](#spinappspecvariablesindexvaluefrom) -Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, -spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. +Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, +`metadata.annotations['']`, spec.nodeName, spec.serviceAccountName, status.hostIP, +status.podIP, status.podIPs.
@@ -1385,8 +1386,9 @@ spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podI [back to parent](#spinappspecvariablesindexvaluefrom) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. +Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, +limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are +currently supported.
@@ -1525,16 +1527,15 @@ recursively. If ReadOnly is false, this field has no meaning and must be unspecified. -If ReadOnly is true, and this field is set to Disabled, the mount is not made -recursively read-only. If this field is set to IfPossible, the mount is made -recursively read-only, if it is supported by the container runtime. If this -field is set to Enabled, the mount is made recursively read-only if it is -supported by the container runtime, otherwise the pod will not be started and -an error will be generated to indicate the reason. +If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. +If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by +the container runtime. If this field is set to Enabled, the mount is made recursively read-only if +it is supported by the container runtime, otherwise the pod will not be started and an error will be +generated to indicate the reason. -If this field is set to IfPossible or Enabled, MountPropagation must be set to -None (or be unspecified, which defaults to None). +If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be +unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled.
@@ -1553,9 +1554,9 @@ Defaults to "" (volume's root).
@@ -1662,29 +1663,22 @@ The volume's lifecycle is tied to the pod that defines it - it will be created b and deleted when the pod is removed. -Use this if: -a) the volume is only needed while the pod runs, -b) features of normal volumes like restoring from snapshot or capacity - tracking are needed, -c) the storage driver is specified through a storage class, and -d) the storage driver supports dynamic volume provisioning through - a PersistentVolumeClaim (see EphemeralVolumeSource for more - information on the connection between this volume type - and PersistentVolumeClaim). +Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like +restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through +a storage class, and d) the storage driver supports dynamic volume provisioning through a + PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between +this volume type and PersistentVolumeClaim). -Use PersistentVolumeClaim or one of the vendor-specific -APIs for volumes that persist for longer than the lifecycle -of an individual pod. +Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer +than the lifecycle of an individual pod. -Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to -be used that way - see the documentation of the driver for -more information. +Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - +see the documentation of the driver for more information. -A pod can use both types of ephemeral volumes and -persistent volumes at the same time.
+A pod can use both types of ephemeral volumes and persistent volumes at the same time.
@@ -1714,8 +1708,8 @@ provisioned/attached using an exec based plugin.
@@ -1723,9 +1717,9 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk @@ -1741,13 +1735,12 @@ More info: https://examples.k8s.io/volumes/glusterfs/README.md
@@ -1755,8 +1748,8 @@ mount host directories as read/write.
@@ -1772,8 +1765,8 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
@@ -1849,9 +1842,9 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
[back to parent](#spinappspecvolumesindex) -awsElasticBlockStore represents an AWS Disk resource that is attached to a -kubelet's host machine and then exposed to the pod. -More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore +awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine +and then exposed to the pod. More info: +https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
string Expanded path within the volume from which the container's volume should be mounted. -Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. -Defaults to "" (volume's root). -SubPathExpr and SubPath are mutually exclusive.
+Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the +container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually +exclusive.
false
false
object gcePersistentDisk represents a GCE Disk resource that is attached to a -kubelet's host machine and then exposed to the pod. -More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
+kubelet's host machine and then exposed to the pod. More info: +https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
false
object gitRepo represents a git repository at a particular revision. -DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an -EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir -into the Pod's container.
+DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into +an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's +container.
false
object hostPath represents a pre-existing file or directory on the host -machine that is directly exposed to the container. This is generally -used for system agents or other privileged things that are allowed -to see the host machine. Most containers will NOT need this. -More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath +machine that is directly exposed to the container. This is generally used for system agents or other +privileged things that are allowed to see the host machine. Most containers will NOT need this. More +info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- -TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not -mount host directories as read/write.
+TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host +directories as read/write.
false
object iscsi represents an ISCSI Disk resource that is attached to a -kubelet's host machine and then exposed to the pod. -More info: https://examples.k8s.io/volumes/iscsi/README.md
+kubelet's host machine and then exposed to the pod. More info: +https://examples.k8s.io/volumes/iscsi/README.md
false
object persistentVolumeClaimVolumeSource represents a reference to a -PersistentVolumeClaim in the same namespace. -More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+PersistentVolumeClaim in the same namespace. More info: +https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
false
@@ -2081,8 +2074,8 @@ More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
[back to parent](#spinappspecvolumesindexcephfs) -secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. -More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it +secretRef is Optional: SecretRef is reference to the authentication secret for User, default is +empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
@@ -2110,8 +2103,8 @@ TODO: Add other useful fields. apiVersion, kind, uid?
[back to parent](#spinappspecvolumesindex) -cinder represents a cinder volume attached and mounted on kubelets host machine. -More info: https://examples.k8s.io/mysql-cinder-pd/README.md +cinder represents a cinder volume attached and mounted on kubelets host machine. More info: +https://examples.k8s.io/mysql-cinder-pd/README.md
@@ -2165,8 +2158,7 @@ to OpenStack.
[back to parent](#spinappspecvolumesindexcinder) -secretRef is optional: points to a secret object containing parameters used to connect -to OpenStack. +secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.
@@ -2307,7 +2299,8 @@ mode, like fsGroup, and the result can be other mode bits set.
[back to parent](#spinappspecvolumesindex) -csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). +csi (Container Storage Interface) represents ephemeral storage that is handled by certain external +CSI drivers (Beta feature).
@@ -2370,11 +2363,10 @@ driver. Consult your driver's documentation for supported values.
[back to parent](#spinappspecvolumesindexcsi) -nodePublishSecretRef is a reference to the secret object containing -sensitive information to pass to the CSI driver to complete the CSI -NodePublishVolume and NodeUnpublishVolume calls. -This field is optional, and may be empty if no secret is required. If the -secret object contains more than one secret, all secret references are passed. +nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to +the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is +optional, and may be empty if no secret is required. If the secret object contains more than one +secret, all secret references are passed.
@@ -2499,7 +2491,8 @@ mode, like fsGroup, and the result can be other mode bits set.
[back to parent](#spinappspecvolumesindexdownwardapiitemsindex) -Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. +Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are +supported.
@@ -2532,8 +2525,8 @@ Required: Selects a field of the pod: only annotations, labels, name, namespace [back to parent](#spinappspecvolumesindexdownwardapiitemsindex) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. +Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, +requests.cpu and requests.memory) are currently supported.
@@ -2573,8 +2566,8 @@ Selects a resource of the container: only resources limits and requests [back to parent](#spinappspecvolumesindex) -emptyDir represents a temporary directory that shares a pod's lifetime. -More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir +emptyDir represents a temporary directory that shares a pod's lifetime. More info: +https://kubernetes.io/docs/concepts/storage/volumes#emptydir
@@ -2615,34 +2608,27 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
[back to parent](#spinappspecvolumesindex) -ephemeral represents a volume that is handled by a cluster storage driver. -The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, -and deleted when the pod is removed. +ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is +tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod +is removed. -Use this if: -a) the volume is only needed while the pod runs, -b) features of normal volumes like restoring from snapshot or capacity - tracking are needed, -c) the storage driver is specified through a storage class, and -d) the storage driver supports dynamic volume provisioning through - a PersistentVolumeClaim (see EphemeralVolumeSource for more - information on the connection between this volume type - and PersistentVolumeClaim). +Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like +restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through +a storage class, and d) the storage driver supports dynamic volume provisioning through a + PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between +this volume type and PersistentVolumeClaim). -Use PersistentVolumeClaim or one of the vendor-specific -APIs for volumes that persist for longer than the lifecycle -of an individual pod. +Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer +than the lifecycle of an individual pod. -Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to -be used that way - see the documentation of the driver for -more information. +Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - +see the documentation of the driver for more information. -A pod can use both types of ephemeral volumes and -persistent volumes at the same time. +A pod can use both types of ephemeral volumes and persistent volumes at the same time.
@@ -2666,18 +2652,15 @@ entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). -An existing PVC with that name that is not owned by the pod -will *not* be used for the pod to avoid using an unrelated -volume by mistake. Starting the pod is then blocked until -the unrelated PVC is removed. If such a pre-created PVC is -meant to be used by the pod, the PVC has to updated with an -owner reference to the pod once the pod exists. Normally -this should not be necessary, but it may be useful when -manually reconstructing a broken cluster. +An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid +using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is +removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an +owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be +useful when manually reconstructing a broken cluster. -This field is read-only and no changes will be made by Kubernetes -to the PVC after it has been created. +This field is read-only and no changes will be made by Kubernetes to the PVC after it has been +created. Required, must not be nil.
@@ -2691,27 +2674,22 @@ Required, must not be nil.
[back to parent](#spinappspecvolumesindexephemeral) -Will be used to create a stand-alone PVC to provision the volume. -The pod in which this EphemeralVolumeSource is embedded will be the -owner of the PVC, i.e. the PVC will be deleted together with the -pod. The name of the PVC will be `-` where -`` is the name from the `PodSpec.Volumes` array -entry. Pod validation will reject the pod if the concatenated name -is not valid for a PVC (for example, too long). +Will be used to create a stand-alone PVC to provision the volume. The pod in which this +EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted +together with the pod. The name of the PVC will be `-` where `` +is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the +concatenated name is not valid for a PVC (for example, too long). -An existing PVC with that name that is not owned by the pod -will *not* be used for the pod to avoid using an unrelated -volume by mistake. Starting the pod is then blocked until -the unrelated PVC is removed. If such a pre-created PVC is -meant to be used by the pod, the PVC has to updated with an -owner reference to the pod once the pod exists. Normally -this should not be necessary, but it may be useful when -manually reconstructing a broken cluster. +An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid +using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is +removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an +owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be +useful when manually reconstructing a broken cluster. -This field is read-only and no changes will be made by Kubernetes -to the PVC after it has been created. +This field is read-only and no changes will be made by Kubernetes to the PVC after it has been +created. Required, must not be nil. @@ -2752,10 +2730,9 @@ validation.
[back to parent](#spinappspecvolumesindexephemeralvolumeclaimtemplate) -The specification for the PersistentVolumeClaim. The entire content is -copied unchanged into the PVC that gets created from this -template. The same fields as in a PersistentVolumeClaim -are also valid here. +The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC +that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid +here.
@@ -2886,12 +2863,12 @@ Value of Filesystem is implied when not included in claim spec.
dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) -* An existing PVC (PersistentVolumeClaim) -If the provisioner or an external controller can support the specified data source, -it will create a new volume based on the contents of the specified data source. -When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, -and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. -If the namespace is specified, then dataSourceRef will not be copied to dataSource. +* An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support +the specified data source, it will create a new volume based on the contents of the specified data +source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to +dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace +is not specified. If the namespace is specified, then dataSourceRef will not be copied to +dataSource.
@@ -2934,28 +2911,23 @@ For any other third-party types, APIGroup is required.
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty -volume is desired. This may be any object from a non-empty API group (non -core object) or a PersistentVolumeClaim object. -When this field is specified, volume binding will only succeed if the type of -the specified object matches some installed volume populator or dynamic -provisioner. -This field will replace the functionality of the dataSource field and as such -if both fields are non-empty, they must have the same value. For backwards -compatibility, when namespace isn't specified in dataSourceRef, -both fields (dataSource and dataSourceRef) will be set to the same -value automatically if one of them is empty and the other is non-empty. -When namespace is specified in dataSourceRef, -dataSource isn't set to the same value and must be empty. -There are three important differences between dataSource and dataSourceRef: -* While dataSource only allows two specific types of objects, dataSourceRef - allows any non-core object, as well as PersistentVolumeClaim objects. -* While dataSource ignores disallowed values (dropping them), dataSourceRef - preserves all values, and generates an error if a disallowed value is - specified. -* While dataSource only allows local objects, dataSourceRef allows objects - in any namespaces. -(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. -(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. +volume is desired. This may be any object from a non-empty API group (non core object) or a +PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the +type of the specified object matches some installed volume populator or dynamic provisioner. This +field will replace the functionality of the dataSource field and as such if both fields are +non-empty, they must have the same value. For backwards compatibility, when namespace isn't +specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value +automatically if one of them is empty and the other is non-empty. When namespace is specified in +dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important +differences between dataSource and dataSourceRef: +* While dataSource only allows two specific types of objects, dataSourceRef allows any non-core + object, as well as PersistentVolumeClaim objects. +* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, + and generates an error if a disallowed value is specified. +* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) + Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the +namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be +enabled.
@@ -3006,11 +2978,10 @@ Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGr [back to parent](#spinappspecvolumesindexephemeralvolumeclaimtemplatespec) -resources represents the minimum resources the volume should have. -If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements -that are lower than previous value but must still be higher than capacity recorded in the -status field of the claim. -More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources +resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure +feature is enabled users are allowed to specify resource requirements that are lower than previous +value but must still be higher than capacity recorded in the status field of the claim. More info: +https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
@@ -3082,8 +3053,8 @@ operator is "In", and the values array contains only "value". The requirements a [back to parent](#spinappspecvolumesindexephemeralvolumeclaimtemplatespecselector) -A label selector requirement is a selector that contains values, a key, and an operator that -relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that relates +the key and values.
@@ -3127,7 +3098,8 @@ merge patch.
[back to parent](#spinappspecvolumesindex) -fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. +fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed +to the pod.
@@ -3188,8 +3160,8 @@ Either wwids or combination of targetWWNs and lun must be set, but not both simu [back to parent](#spinappspecvolumesindex) -flexVolume represents a generic volume resource that is -provisioned/attached using an exec based plugin. +flexVolume represents a generic volume resource that is provisioned/attached using an exec based +plugin.
@@ -3250,11 +3222,9 @@ scripts.
[back to parent](#spinappspecvolumesindexflexvolume) -secretRef is Optional: secretRef is reference to the secret object containing -sensitive information to pass to the plugin scripts. This may be -empty if no secret object is specified. If the secret object -contains more than one secret, all secrets are passed to the plugin -scripts. +secretRef is Optional: secretRef is reference to the secret object containing sensitive information +to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret +object contains more than one secret, all secrets are passed to the plugin scripts.
@@ -3282,7 +3252,8 @@ TODO: Add other useful fields. apiVersion, kind, uid?
[back to parent](#spinappspecvolumesindex) -flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running +flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the +Flocker control service being running
@@ -3316,9 +3287,9 @@ should be considered as deprecated
[back to parent](#spinappspecvolumesindex) -gcePersistentDisk represents a GCE Disk resource that is attached to a -kubelet's host machine and then exposed to the pod. -More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk +gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and +then exposed to the pod. More info: +https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
@@ -3378,10 +3349,9 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk [back to parent](#spinappspecvolumesindex) -gitRepo represents a git repository at a particular revision. -DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an -EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir -into the Pod's container. +gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To +provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo +using git, then mount the EmptyDir into the Pod's container.
@@ -3424,8 +3394,8 @@ the subdirectory with the given name.
[back to parent](#spinappspecvolumesindex) -glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. -More info: https://examples.k8s.io/volumes/glusterfs/README.md +glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: +https://examples.k8s.io/volumes/glusterfs/README.md
@@ -3469,14 +3439,13 @@ More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
[back to parent](#spinappspecvolumesindex) -hostPath represents a pre-existing file or directory on the host -machine that is directly exposed to the container. This is generally -used for system agents or other privileged things that are allowed -to see the host machine. Most containers will NOT need this. -More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath +hostPath represents a pre-existing file or directory on the host machine that is directly exposed to +the container. This is generally used for system agents or other privileged things that are allowed +to see the host machine. Most containers will NOT need this. More info: +https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- -TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not -mount host directories as read/write. +TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host +directories as read/write.
@@ -3513,9 +3482,8 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
[back to parent](#spinappspecvolumesindex) -iscsi represents an ISCSI Disk resource that is attached to a -kubelet's host machine and then exposed to the pod. -More info: https://examples.k8s.io/volumes/iscsi/README.md +iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then +exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
@@ -3651,8 +3619,8 @@ TODO: Add other useful fields. apiVersion, kind, uid?
[back to parent](#spinappspecvolumesindex) -nfs represents an NFS mount on the host that shares a pod's lifetime -More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs +nfs represents an NFS mount on the host that shares a pod's lifetime More info: +https://kubernetes.io/docs/concepts/storage/volumes#nfs
@@ -3696,9 +3664,9 @@ More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
[back to parent](#spinappspecvolumesindex) -persistentVolumeClaimVolumeSource represents a reference to a -PersistentVolumeClaim in the same namespace. -More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims +persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same +namespace. More info: +https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
@@ -3733,7 +3701,8 @@ Default false.
[back to parent](#spinappspecvolumesindex) -photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine +photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets +host machine
@@ -3873,15 +3842,14 @@ of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. -ClusterTrustBundle objects can either be selected by name, or by the -combination of signer name and a label selector. +ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and +a label selector. -Kubelet performs aggressive normalization of the PEM contents written -into the pod filesystem. Esoteric PEM features such as inter-block -comments and block headers are stripped. Certificates are deduplicated. -The ordering of certificates within the file is arbitrary, and Kubelet -may change the order over time.
+Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. +Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are +deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the +order over time.
@@ -3920,22 +3888,21 @@ may change the order over time.
[back to parent](#spinappspecvolumesindexprojectedsourcesindex) -ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field -of ClusterTrustBundle objects in an auto-updating file. +ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field of ClusterTrustBundle +objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. -ClusterTrustBundle objects can either be selected by name, or by the -combination of signer name and a label selector. +ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and +a label selector. -Kubelet performs aggressive normalization of the PEM contents written -into the pod filesystem. Esoteric PEM features such as inter-block -comments and block headers are stripped. Certificates are deduplicated. -The ordering of certificates within the file is arbitrary, and Kubelet -may change the order over time. +Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. +Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are +deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the +order over time.
false
@@ -3999,10 +3966,9 @@ ClusterTrustBundles will be unified and deduplicated.
[back to parent](#spinappspecvolumesindexprojectedsourcesindexclustertrustbundle) -Select all ClusterTrustBundles that match this label selector. Only has -effect if signerName is set. Mutually-exclusive with name. If unset, -interpreted as "match nothing". If set but empty, interpreted as "match -everything". +Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is +set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, +interpreted as "match everything".
@@ -4034,11 +4000,12 @@ operator is "In", and the values array contains only "value". The requirements a ### `SpinApp.spec.volumes[index].projected.sources[index].clusterTrustBundle.labelSelector.matchExpressions[index]` -[back to parent](#spinappspecvolumesindexprojectedsourcesindexclustertrustbundlelabelselector) +[back to +parent](#spinappspecvolumesindexprojectedsourcesindexclustertrustbundlelabelselector) -A label selector requirement is a selector that contains values, a key, and an operator that -relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that relates +the key and values.
@@ -4261,7 +4228,8 @@ mode, like fsGroup, and the result can be other mode bits set.
[back to parent](#spinappspecvolumesindexprojectedsourcesindexdownwardapiitemsindex) -Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. +Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are +supported.
@@ -4294,8 +4262,8 @@ Required: Selects a field of the pod: only annotations, labels, name, namespace [back to parent](#spinappspecvolumesindexprojectedsourcesindexdownwardapiitemsindex) -Selects a resource of the container: only resources limits and requests -(limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. +Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, +requests.cpu and requests.memory) are currently supported.
@@ -4551,8 +4519,8 @@ Defaults to serivceaccount user
[back to parent](#spinappspecvolumesindex) -rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. -More info: https://examples.k8s.io/volumes/rbd/README.md +rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: +https://examples.k8s.io/volumes/rbd/README.md
@@ -4644,10 +4612,8 @@ More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
[back to parent](#spinappspecvolumesindexrbd) -secretRef is name of the authentication secret for RBDUser. If provided -overrides keyring. -Default is nil. -More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it +secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default +is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
@@ -4771,8 +4737,8 @@ that is associated with this volume source.
[back to parent](#spinappspecvolumesindexscaleio) -secretRef references to the secret for ScaleIO user and other -sensitive information. If this is not provided, Login operation will fail. +secretRef references to the secret for ScaleIO user and other sensitive information. If this is not +provided, Login operation will fail.
@@ -4800,8 +4766,8 @@ TODO: Add other useful fields. apiVersion, kind, uid?
[back to parent](#spinappspecvolumesindex) -secret represents a secret that should populate this volume. -More info: https://kubernetes.io/docs/concepts/storage/volumes#secret +secret represents a secret that should populate this volume. More info: +https://kubernetes.io/docs/concepts/storage/volumes#secret
@@ -4977,8 +4943,8 @@ Namespaces that do not pre-exist within StorageOS will be created.
[back to parent](#spinappspecvolumesindexstorageos) -secretRef specifies the secret to use for obtaining the StorageOS API -credentials. If not specified, default values will be attempted. +secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not +specified, default values will be attempted.
@@ -5106,17 +5072,15 @@ For further information see: https://github.com/kubernetes/community/blob/master

Condition contains details for one aspect of the current state of this API Resource.
---
-This struct is intended for direct use as an array at the field path .status.conditions. For example,
+This struct is intended for direct use as an array at the field path .status.conditions. For
+example,

-    type FooStatus struct{
-        // Represents the observations of a foo's current state.
-        // Known .status.conditions.type are: "Available", "Progressing", and "Degraded"
-        // +patchMergeKey=type
-        // +patchStrategy=merge
-        // +listType=map
-        // +listMapKey=type
-        Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
+    type FooStatus struct{
+        // Represents the observations of a foo's current state.
+        // Known .status.conditions.type are: "Available", "Progressing", and "Degraded"
+        // +patchMergeKey=type
+        // +patchStrategy=merge
+        // +listType=map
+        // +listMapKey=type
+        Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`

    // other fields

diff --git a/content/en/docs/topics/architecture.md b/content/en/docs/topics/architecture.md
index d8b27980..834b7b50 100644
--- a/content/en/docs/topics/architecture.md
+++ b/content/en/docs/topics/architecture.md
@@ -33,16 +33,15 @@ composed of two main components:

![spin-operator diagram](../spin-operator-diagram.png)

SpinApps CRDs can be [composed manually]({{< ref "glossary#custom-resource-definition-crd" >}}) or
-generated automatically from an existing Spin application using the [`spin kube scaffold`](#spin-plugin-kube) command.
-The former approach lends itself well to CI/CD systems, whereas the latter is a better fit for local
-testing as part of a local developer workflow.
+generated automatically from an existing Spin application using the [`spin kube
+scaffold`](#spin-plugin-kube) command. The former approach lends itself well to CI/CD systems,
+whereas the latter is a better fit for local testing as part of a local developer workflow.

Once an application deployment begins, Spin Operator handles scheduling the workload on the
-appropriate nodes (thanks to the [Runtime Class Manager](#runtime-class-manager),
-previously known as Kwasm) and managing the resource's lifecycle. There is no need to fetch the
-[`containerd-shim-spin`](#containerd-shim-spin) binary or mutate node labels. This is all
-managed via the Runtime Class Manager, which you will install as a dependency when setting up Spin
-Operator.
+appropriate nodes (thanks to the [Runtime Class Manager](#runtime-class-manager), previously known
+as Kwasm) and managing the resource's lifecycle. There is no need to fetch the
+[`containerd-shim-spin`](#containerd-shim-spin) binary or mutate node labels. This is all managed
+via the Runtime Class Manager, which you will install as a dependency when setting up Spin Operator.

## containerd-shim-spin

diff --git a/content/en/docs/topics/assigning-variables.md b/content/en/docs/topics/assigning-variables.md
index 67c4333c..179fdb56 100644
--- a/content/en/docs/topics/assigning-variables.md
+++ b/content/en/docs/topics/assigning-variables.md
@@ -10,13 +10,22 @@ aliases:

---

-By using variables, you can alter application behavior without recompiling your SpinApp. 
When running in Kubernetes, you can either provide constant values for variables, or reference them from Kubernetes primitives such as `ConfigMaps` and `Secrets`. This tutorial guides your through the process of assigning variables to your `SpinApp`.
+By using variables, you can alter application behavior without recompiling your SpinApp. When
+running in Kubernetes, you can either provide constant values for variables, or reference them from
+Kubernetes primitives such as `ConfigMaps` and `Secrets`. This tutorial guides you through the
+process of assigning variables to your `SpinApp`.

-> Note: If you'd like to learn how to configure your application with an external variable provider like [Vault](https://vaultproject.io) or [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault), see the [External Variable Provider guide](./external-variable-providers.md)
+> Note: If you'd like to learn how to configure your application with an external variable provider
+> like [Vault](https://vaultproject.io) or [Azure Key
+> Vault](https://azure.microsoft.com/en-us/products/key-vault), see the [External Variable Provider
+> guide](./external-variable-providers.md)

## Build and Store SpinApp in an OCI Registry

-We’re going to build the SpinApp and store it inside of a [ttl.sh](http://ttl.sh) registry. Move into the [apps/variable-explorer](https://github.com/spinkube/spin-operator/blob/main/apps/variable-explorer) directory and build the SpinApp we’ve provided:
+We’re going to build the SpinApp and store it inside of a [ttl.sh](http://ttl.sh) registry. Move
+into the
+[apps/variable-explorer](https://github.com/spinkube/spin-operator/blob/main/apps/variable-explorer)
+directory and build the SpinApp we’ve provided:

```bash
# Build and publish the sample app
@@ -25,9 +34,14 @@ spin build
spin registry push ttl.sh/variable-explorer:1h
```

-Note that the tag at the end of [ttl.sh/variable-explorer:1h](http://ttl.sh/variable-explorer:1h) indicates how long the image will last e.g. `1h` (1 hour). The maximum is `24h` and you will need to repush if ttl exceeds 24 hours.
+Note that the tag at the end of [ttl.sh/variable-explorer:1h](http://ttl.sh/variable-explorer:1h)
+indicates how long the image will last e.g. `1h` (1 hour). The maximum is `24h` and you will need to
+repush if ttl exceeds 24 hours.

-For demonstration purposes, we use the [variable explorer](https://github.com/spinkube/spin-operator/blob/main/apps/variable-explorer) sample app. It reads three different variables (`log_level`, `platform_name` and `db_password`) and prints their values to the `STDOUT` stream as shown in the following snippet:
+For demonstration purposes, we use the [variable
+explorer](https://github.com/spinkube/spin-operator/blob/main/apps/variable-explorer) sample app. 
It +reads three different variables (`log_level`, `platform_name` and `db_password`) and prints their +values to the `STDOUT` stream as shown in the following snippet: ```rust let log_level = variables::get("log_level")?; @@ -39,7 +53,8 @@ println!("# Platform name: {}", platform_name); println!("# DB Password: {}", db_password); ``` -Those variables are defined as part of the Spin manifest (`spin.toml`), and access to them is granted to the `variable-explorer` component: +Those variables are defined as part of the Spin manifest (`spin.toml`), and access to them is +granted to the `variable-explorer` component: ```toml [variables] @@ -53,11 +68,14 @@ platform_name = "{{ platform_name }}" db_password = "{{ db_password }}" ``` -For further reading on defining variables in the Spin manifest, see the [Spin Application Manifest Reference](https://developer.fermyon.com/spin/v2/manifest-reference#the-variables-table). +For further reading on defining variables in the Spin manifest, see the [Spin Application Manifest +Reference](https://developer.fermyon.com/spin/v2/manifest-reference#the-variables-table). ## Configuration data in Kubernetes -In Kubernetes, you use `ConfigMaps` for storing non-sensitive, and `Secrets` for storing sensitive configuration data. The deployment manifest (`config/samples/variable-explorer.yaml`) contains specifications for both a `ConfigMap` and a `Secret`: +In Kubernetes, you use `ConfigMaps` for storing non-sensitive, and `Secrets` for storing sensitive +configuration data. The deployment manifest (`config/samples/variable-explorer.yaml`) contains +specifications for both a `ConfigMap` and a `Secret`: ```yaml kind: ConfigMap @@ -83,9 +101,12 @@ When creating a `SpinApp`, you can choose from different approaches for specifyi 2. Loading configuration values from ConfigMaps 3. Loading configuration values from Secrets -The `SpinApp` specification contains the `variables` array, that you use for specifying variables (See `kubectl explain spinapp.spec.variables`). +The `SpinApp` specification contains the `variables` array, that you use for specifying variables +(See `kubectl explain spinapp.spec.variables`). -The deployment manifest (`config/samples/variable-explorer.yaml`) specifies a static value for `platform_name`. The value of `log_level` is read from the `ConfigMap` called `spinapp-cfg`, and the `db_password` is read from the `Secret` called `spinapp-secret`: +The deployment manifest (`config/samples/variable-explorer.yaml`) specifies a static value for +`platform_name`. The value of `log_level` is read from the `ConfigMap` called `spinapp-cfg`, and the +`db_password` is read from the `Secret` called `spinapp-secret`: ```yaml kind: SpinApp @@ -113,7 +134,9 @@ spec: optional: false ``` -As the deployment manifest outlines, you can use the `optional` property - as you would do when specifying environment variables for a regular Kubernetes `Pod` - to control if Kubernetes should prevent starting the SpinApp, if the referenced configuration source does not exist. +As the deployment manifest outlines, you can use the `optional` property - as you would do when +specifying environment variables for a regular Kubernetes `Pod` - to control if Kubernetes should +prevent starting the SpinApp, if the referenced configuration source does not exist. 
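+
+Since both references in this sample are required (`optional: false`), it can be worth checking
+that the sources exist before deploying. A quick sanity check (names as used in the sample above):
+
+```bash
+# Confirm the ConfigMap and Secret referenced by the SpinApp are present
+kubectl get configmap spinapp-cfg
+kubectl get secret spinapp-secret
+```
+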
You can deploy all resources by executing the following command: @@ -127,7 +150,8 @@ spinapp.core.spinoperator.dev/variable-explorer created ## Inspecting runtime logs of your SpinApp -To verify that all variables are passed correctly to the SpinApp, you can configure port forwarding from your local machine to the corresponding Kubernetes `Service`: +To verify that all variables are passed correctly to the SpinApp, you can configure port forwarding +from your local machine to the corresponding Kubernetes `Service`: ```bash kubectl port-forward services/variable-explorer 8080:80 @@ -136,7 +160,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -When port forwarding is established, you can send an HTTP request to the variable-explorer from within an additional terminal session: +When port forwarding is established, you can send an HTTP request to the variable-explorer from +within an additional terminal session: ```bash curl http://localhost:8080 diff --git a/content/en/docs/topics/autoscaling/autoscaling.md b/content/en/docs/topics/autoscaling/autoscaling.md index 3433f2cc..b98f8425 100644 --- a/content/en/docs/topics/autoscaling/autoscaling.md +++ b/content/en/docs/topics/autoscaling/autoscaling.md @@ -44,8 +44,8 @@ of your application. #### Horizontal Pod Autoscaling (HPA) Horizontal Pod Autoscaler (HPA) scales Kubernetes pods based on CPU or memory utilization. This HPA -scaling can be implemented via the Kubernetes plugin for Spin by setting the `--autoscaler hpa` option. This -page deals exclusively with autoscaling via the Kubernetes plugin for Spin. +scaling can be implemented via the Kubernetes plugin for Spin by setting the `--autoscaler hpa` +option. This page deals exclusively with autoscaling via the Kubernetes plugin for Spin. ```sh spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi @@ -69,8 +69,8 @@ spin kube scaffold --from user-name/app-name:latest --autoscaler keda --cpu-limi Using KEDA to autoscale your Spin applications requires the installation of the [KEDA runtime](https://keda.sh/) into your Kubernetes cluster. For more information about scaling with -KEDA in general, please see the Spin Operator's [Scaling with KEDA -section]({{< ref "scaling-with-keda" >}}) +KEDA in general, please see the Spin Operator's [Scaling with KEDA section]({{< ref +"scaling-with-keda" >}}) ### Setting min/max replicas diff --git a/content/en/docs/topics/autoscaling/scaling-with-hpa.md b/content/en/docs/topics/autoscaling/scaling-with-hpa.md index 01af25aa..f85ae766 100644 --- a/content/en/docs/topics/autoscaling/scaling-with-hpa.md +++ b/content/en/docs/topics/autoscaling/scaling-with-hpa.md @@ -8,7 +8,11 @@ aliases: - /docs/spin-operator/tutorials/scaling-with-hpa --- -Horizontal scaling, in the Kubernetes sense, means deploying more pods to meet demand (different from vertical scaling whereby more memory and CPU resources are assigned to already running pods). In this tutorial, we configure [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to dynamically scale the instance count of our SpinApps to meet the demand. +Horizontal scaling, in the Kubernetes sense, means deploying more pods to meet demand (different +from vertical scaling whereby more memory and CPU resources are assigned to already running pods). 
+In this tutorial, we configure +[HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to dynamically +scale the instance count of our SpinApps to meet the demand. ## Prerequisites @@ -18,13 +22,17 @@ Ensure you have the following tools installed: - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI - [k3d](https://k3d.io) - a lightweight Kubernetes distribution that runs on Docker - [Helm](https://helm.sh) - the package manager for Kubernetes -- [Bombardier](https://pkg.go.dev/github.com/codesenberg/bombardier) - cross-platform HTTP benchmarking CLI +- [Bombardier](https://pkg.go.dev/github.com/codesenberg/bombardier) - cross-platform HTTP + benchmarking CLI -> We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these steps to configure HPA autoscaling on your desired Kubernetes environment. +> We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these +> steps to configure HPA autoscaling on your desired Kubernetes environment. ## Setting Up Kubernetes Cluster -Run the following command to create a Kubernetes cluster that has [the containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) pre-requisites installed: If you have a Kubernetes cluster already, please feel free to use it: +Run the following command to create a Kubernetes cluster that has [the +containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) pre-requisites installed: If +you have a Kubernetes cluster already, please feel free to use it: ```console k3d cluster create wasm-cluster-scale \ @@ -35,7 +43,10 @@ k3d cluster create wasm-cluster-scale \ ### Deploying Spin Operator and its dependencies -First, you have to install [cert-manager](https://github.com/cert-manager/cert-manager) to automatically provision and manage TLS certificates (used by Spin Operator's admission webhook system). For detailed installation instructions see [the cert-manager documentation](https://cert-manager.io/docs/installation/). +First, you have to install [cert-manager](https://github.com/cert-manager/cert-manager) to +automatically provision and manage TLS certificates (used by Spin Operator's admission webhook +system). For detailed installation instructions see [the cert-manager +documentation](https://cert-manager.io/docs/installation/). ```console # Install cert-manager CRDs @@ -53,9 +64,13 @@ helm install \ --version v1.14.3 ``` -Next, run the following commands to install the Spin [Runtime Class]({{}}) and Spin Operator [Custom Resource Definitions (CRDs)]({{}}): +Next, run the following commands to install the Spin [Runtime Class]({{}}) and Spin Operator [Custom Resource Definitions (CRDs)]({{}}): -> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` that matches nodes that have the shim installed. However, in the K3d example, they're installed on every node. +> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` +> that matches nodes that have the shim installed. However, in the K3d example, they're installed on +> every node. ```console # Install the RuntimeClass @@ -65,7 +80,8 @@ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0. 
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml
```

-Lastly, install Spin Operator using `helm` and the [shim executor]({{< ref "glossary#spin-app-executor-crd" >}}) with the following commands:
+Lastly, install Spin Operator using `helm` and the [shim executor]({{< ref
+"glossary#spin-app-executor-crd" >}}) with the following commands:

```console
# Install Spin Operator
@@ -80,11 +96,13 @@ helm install spin-operator \
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.shim-executor.yaml
```

-Great, now you have Spin Operator up and running on your cluster. This means you’re set to create and deploy SpinApps later on in the tutorial.
+Great, now you have Spin Operator up and running on your cluster. This means you’re set to create
+and deploy SpinApps later on in the tutorial.

## Set Up Ingress

-Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can reach your SpinApp once we’ve created it in future steps:
+Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can
+reach your SpinApp once we’ve created it in future steps:

```console
# Setup ingress following this tutorial https://k3d.io/v5.4.6/usage/exposing_services/
@@ -113,9 +131,17 @@ Hit enter to create the ingress resource.

## Deploy Spin App and HorizontalPodAutoscaler (HPA)

-Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the Spin App in the [apps/cpu-load-gen](https://github.com/spinkube/spin-operator/tree/main/apps/cpu-load-gen) folder of the Spin Operator repository.
+Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the
+Spin App in the
+[apps/cpu-load-gen](https://github.com/spinkube/spin-operator/tree/main/apps/cpu-load-gen) folder of
+the Spin Operator repository.

-We can take a look at the SpinApp and HPA definitions in our deployment file below/. As you can see, we have set our `resources` -> `limits` to `500m` of `cpu` and `500Mi` of `memory` per Spin application and we will scale the instance count when we’ve reached a 50% utilization in `cpu` and `memory`. We’ve also defined support a maximum [replica](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#replicas) count of 10 and a minimum replica count of 1:
+We can take a look at the SpinApp and HPA definitions in our deployment file below. As you can see,
+we have set our `resources` -> `limits` to `500m` of `cpu` and `500Mi` of `memory` per Spin
+application and we will scale the instance count when we’ve reached a 50% utilization in `cpu` and
+`memory`. 
We’ve also defined a maximum
+[replica](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#replicas) count of
+10 and a minimum replica count of 1:

```yaml
apiVersion: core.spinoperator.dev/v1alpha1
@@ -154,9 +180,12 @@ spec:
```

For more information about HPA, please visit the following links:
-- [Kubernetes Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
-- [Kubernetes HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
-- [HPA Container Resource Metrics](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics)
+- [Kubernetes Horizontal Pod
+  Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
+- [Kubernetes HorizontalPodAutoscaler
+  Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
+- [HPA Container Resource
+  Metrics](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics)

Below is an example of the configuration to scale resources:

```yaml
apiVersion: autoscaling/v2
@@ -197,7 +226,8 @@ spec:
        averageUtilization: 50
```

-Let’s deploy the SpinApp and the HPA instance onto our cluster (using the above `.yaml` configuration). To apply the above configuration we use the following `kubectl apply` command:
+Let’s deploy the SpinApp and the HPA instance onto our cluster (using the above `.yaml`
+configuration). To apply the above configuration we use the following `kubectl apply` command:

```console
# Install SpinApp and HPA
@@ -220,11 +250,18 @@ NAME                 REFERENCE               TARGETS   MINPODS   MAXPODS   REPL
spinapp-autoscaler   Deployment/hpa-spinapp   6%/50%    1         10        1          97m
```

-> Please note: The [Kubernetes Plugin for Spin](https://www.spinkube.dev/docs/spin-plugin-kube/installation/) is a tool designed for Kubernetes integration with the Spin command-line interface. The [Kubernetes Plugin for Spin has a scaling tutorial](https://www.spinkube.dev/docs/spin-plugin-kube/tutorials/autoscaler-support/) that demonstrates how to use the `spin kube` command to tell Kubernetes when to scale your Spin application up or down based on demand).
+> Please note: The [Kubernetes Plugin for
+> Spin](https://www.spinkube.dev/docs/spin-plugin-kube/installation/) is a tool designed for
+> Kubernetes integration with the Spin command-line interface. The [Kubernetes Plugin for Spin has a
+> scaling tutorial](https://www.spinkube.dev/docs/spin-plugin-kube/tutorials/autoscaler-support/)
+> that demonstrates how to use the `spin kube` command to tell Kubernetes when to scale your Spin
+> application up or down based on demand.

## Generate Load to Test Autoscale

-Now let’s use Bombardier to generate traffic to test how well HPA scales our SpinApp. The following Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less). If a request is not responded to within 5 seconds that request will timeout:
+Now let’s use Bombardier to generate traffic to test how well HPA scales our SpinApp. The following
+Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less).
+If a request is not responded to within 5 seconds that request will timeout:

```console
# Generate a bunch of load
diff --git a/content/en/docs/topics/autoscaling/scaling-with-keda.md b/content/en/docs/topics/autoscaling/scaling-with-keda.md
index 0a878290..70d2707f 100644
--- a/content/en/docs/topics/autoscaling/scaling-with-keda.md
+++ b/content/en/docs/topics/autoscaling/scaling-with-keda.md
@@ -8,7 +8,11 @@ aliases:
  - /docs/spin-operator/tutorials/scaling-with-keda
---

-[KEDA](https://keda.sh) extends Kubernetes to provide event-driven scaling capabilities, allowing it to react to events from Kubernetes internal and external sources using [KEDA scalers](https://keda.sh/docs/2.13/scalers/). KEDA provides a wide variety of scalers to define scaling behavior base on sources like CPU, Memory, Azure Event Hubs, Kafka, RabbitMQ, and more. We use a `ScaledObject` to dynamically scale the instance count of our SpinApp to meet the demand.
+[KEDA](https://keda.sh) extends Kubernetes to provide event-driven scaling capabilities, allowing it
+to react to events from Kubernetes internal and external sources using [KEDA
+scalers](https://keda.sh/docs/2.13/scalers/). KEDA provides a wide variety of scalers to define
+scaling behavior based on sources like CPU, Memory, Azure Event Hubs, Kafka, RabbitMQ, and more. We
+use a `ScaledObject` to dynamically scale the instance count of our SpinApp to meet the demand.

## Prerequisites

@@ -18,13 +22,17 @@ Please ensure the following tools are installed on your local machine:
- [Helm](https://helm.sh) - the package manager for Kubernetes
- [Docker](https://docs.docker.com/engine/install/) - for running k3d
- [k3d](https://k3d.io) - a lightweight Kubernetes distribution that runs on Docker
-- [Bombardier](https://pkg.go.dev/github.com/codesenberg/bombardier) - cross-platform HTTP benchmarking CLI
+- [Bombardier](https://pkg.go.dev/github.com/codesenberg/bombardier) - cross-platform HTTP
+  benchmarking CLI

-> We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these steps to configure KEDA autoscaling on your desired Kubernetes environment.
+> We use k3d to run a Kubernetes cluster locally as part of this tutorial, but you can follow these
+> steps to configure KEDA autoscaling on your desired Kubernetes environment.

## Setting Up Kubernetes Cluster

-Run the following command to create a Kubernetes cluster that has [the containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) pre-requisites installed: If you have a Kubernetes cluster already, please feel free to use it:
+Run the following command to create a Kubernetes cluster that has [the
+containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin) pre-requisites installed: If
+you have a Kubernetes cluster already, please feel free to use it:

```console
k3d cluster create wasm-cluster-scale \
@@ -35,7 +43,10 @@ k3d cluster create wasm-cluster-scale \

### Deploying Spin Operator and its dependencies

-First, you have to install [cert-manager](https://github.com/cert-manager/cert-manager) to automatically provision and manage TLS certificates (used by Spin Operator's admission webhook system). For detailed installation instructions see [the cert-manager documentation](https://cert-manager.io/docs/installation/).
+First, you have to install [cert-manager](https://github.com/cert-manager/cert-manager) to
+automatically provision and manage TLS certificates (used by Spin Operator's admission webhook
+system). 
For detailed installation instructions see [the cert-manager +documentation](https://cert-manager.io/docs/installation/). ```console # Install cert-manager CRDs @@ -53,9 +64,13 @@ helm install \ --version v1.14.3 ``` -Next, run the following commands to install the Spin [Runtime Class]({{}}) and Spin Operator [Custom Resource Definitions (CRDs)]({{}}): +Next, run the following commands to install the Spin [Runtime Class]({{}}) and Spin Operator [Custom Resource Definitions (CRDs)]({{}}): -> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` that matches nodes that have the shim installed. However, in the K3d example, they're installed on every node. +> Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector` +> that matches nodes that have the shim installed. However, in the K3d example, they're installed on +> every node. ```console # Install the RuntimeClass @@ -65,7 +80,8 @@ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0. kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml ``` -Lastly, install Spin Operator using `helm` and the [shim executor]({{< ref "glossary#spin-app-executor-crd" >}}) with the following commands: +Lastly, install Spin Operator using `helm` and the [shim executor]({{< ref +"glossary#spin-app-executor-crd" >}}) with the following commands: ```console # Install Spin Operator @@ -80,11 +96,13 @@ helm install spin-operator \ kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.shim-executor.yaml ``` -Great, now you have Spin Operator up and running on your cluster. This means you’re set to create and deploy SpinApps later on in the tutorial. +Great, now you have Spin Operator up and running on your cluster. This means you’re set to create +and deploy SpinApps later on in the tutorial. ## Set Up Ingress -Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can reach your Spin App once we’ve created it in future steps: +Use the following command to set up ingress on your Kubernetes cluster. This ensures traffic can +reach your Spin App once we’ve created it in future steps: ```console # Setup ingress following this tutorial https://k3d.io/v5.4.6/usage/exposing_services/ @@ -113,7 +131,8 @@ Hit enter to create the ingress resource. ## Setting Up KEDA -Use the following command to setup KEDA on your Kubernetes cluster using Helm. Different deployment methods are described at [Deploying KEDA on keda.sh](https://keda.sh/docs/2.13/deploy/): +Use the following command to setup KEDA on your Kubernetes cluster using Helm. Different deployment +methods are described at [Deploying KEDA on keda.sh](https://keda.sh/docs/2.13/deploy/): ```console # Add the Helm repository @@ -128,10 +147,16 @@ helm install keda kedacore/keda --namespace keda --create-namespace ## Deploy Spin App and the KEDA ScaledObject -Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the Spin App in the [apps/cpu-load-gen](https://github.com/spinkube/spin-operator/tree/main/apps/cpu-load-gen) folder of the Spin Operator repository. +Next up we’re going to deploy the Spin App we will be scaling. You can find the source code of the +Spin App in the +[apps/cpu-load-gen](https://github.com/spinkube/spin-operator/tree/main/apps/cpu-load-gen) folder of +the Spin Operator repository. 
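+
+Before deploying, you can optionally confirm that the KEDA components installed earlier are up and
+running (the exact pod names will vary with the KEDA version):
+
+```console
+# Check the KEDA operator and metrics server pods in the keda namespace
+kubectl get pods --namespace keda
+```
+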
-We can take a look at the `SpinApp` and the KEDA `ScaledObject` definitions in our deployment files below. As you can see, we have explicitly specified resource limits to `500m` of `cpu` (`spec.resources.limits.cpu`) and `500Mi` of `memory` (`spec.resources.limits.memory`) per `SpinApp`:
+We can take a look at the `SpinApp` and the KEDA `ScaledObject` definitions in our deployment files
+below. As you can see, we have explicitly specified resource limits to `500m` of `cpu`
+(`spec.resources.limits.cpu`) and `500Mi` of `memory` (`spec.resources.limits.memory`) per
+`SpinApp`:

```yaml
# https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/keda-app.yaml
@@ -153,7 +178,9 @@ spec:
---
```

-We will scale the instance count when we’ve reached a 50% utilization in `cpu` (`spec.triggers[cpu].metadata.value`). We’ve also instructed KEDA to scale our SpinApp horizontally within the range of 1 (`spec.minReplicaCount`) and 20 (`spec.maxReplicaCount`).:
+We will scale the instance count when we’ve reached a 50% utilization in `cpu`
+(`spec.triggers[cpu].metadata.value`). We’ve also instructed KEDA to scale our SpinApp horizontally
+within the range of 1 (`spec.minReplicaCount`) and 20 (`spec.maxReplicaCount`):

```yaml
# https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/keda-scaledobject.yaml
@@ -173,9 +200,14 @@ spec:
      value: "50"
```

-> The Kubernetes documentation is the place to learn more about [limits and requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). Consult the KEDA documentation to learn more about [ScaledObject](https://keda.sh/docs/2.13/concepts/scaling-deployments/#scaledobject-spec) and [KEDA's built-in scalers](https://keda.sh/docs/2.13/scalers/).
+> The Kubernetes documentation is the place to learn more about [limits and
+> requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits).
+> Consult the KEDA documentation to learn more about
+> [ScaledObject](https://keda.sh/docs/2.13/concepts/scaling-deployments/#scaledobject-spec) and
+> [KEDA's built-in scalers](https://keda.sh/docs/2.13/scalers/).

-Let’s deploy the SpinApp and the KEDA ScaledObject instance onto our cluster with the following command:
+Let’s deploy the SpinApp and the KEDA ScaledObject instance onto our cluster with the following
+command:

```console
# Deploy the SpinApp
@@ -207,7 +239,9 @@ cpu-scaling   apps/v1.Deployment   keda-spinapp   1     20    cpu      True

## Generate Load to Test Autoscale

-Now let’s use Bombardier to generate traffic to test how well KEDA scales our SpinApp. The following Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less). If a request is not responded to within 5 seconds that request will timeout:
+Now let’s use Bombardier to generate traffic to test how well KEDA scales our SpinApp. The following
+Bombardier command will attempt to establish 40 connections during a period of 3 minutes (or less). 
+If a request is not responded to within 5 seconds that request will timeout: ```console # Generate a bunch of load diff --git a/content/en/docs/topics/connecting-to-a-sqlite-database.md b/content/en/docs/topics/connecting-to-a-sqlite-database.md index ba9ce61f..0fae0dd0 100644 --- a/content/en/docs/topics/connecting-to-a-sqlite-database.md +++ b/content/en/docs/topics/connecting-to-a-sqlite-database.md @@ -7,21 +7,33 @@ tags: [Tutorials] weight: 14 --- -Spin applications can utilize a [standardized API for persisting data in a SQLite database](https://developer.fermyon.com/spin/v2/sqlite-api-guide). A default database is created by the Spin runtime on the local filesystem, which is great for getting an application up and running. However, this on-disk solution may not be preferable for an app running in the context of SpinKube, where apps are often scaled beyond just one replica. - -Thankfully, Spin supports configuring an application with an [external SQLite database provider via runtime configuration](https://developer.fermyon.com/spin/v2/dynamic-configuration#libsql-storage-provider). External providers include any [libSQL](https://libsql.org/) databases that can be accessed over HTTPS. +Spin applications can utilize a [standardized API for persisting data in a SQLite +database](https://developer.fermyon.com/spin/v2/sqlite-api-guide). A default database is created by +the Spin runtime on the local filesystem, which is great for getting an application up and running. +However, this on-disk solution may not be preferable for an app running in the context of SpinKube, +where apps are often scaled beyond just one replica. + +Thankfully, Spin supports configuring an application with an [external SQLite database provider via +runtime +configuration](https://developer.fermyon.com/spin/v2/dynamic-configuration#libsql-storage-provider). +External providers include any [libSQL](https://libsql.org/) databases that can be accessed over +HTTPS. ## Prerequisites To follow along with this tutorial, you'll need: -- A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for more information. +- A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for + more information. - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl) - The [spin CLI](https://developer.fermyon.com/spin/v2/install ) ## Build and publish the Spin application -For this tutorial, we'll use the [HTTP CRUD Go SQLite](https://github.com/fermyon/enterprise-architectures-and-patterns/tree/main/http-crud-go-sqlite) sample application. It is a Go-based app implementing CRUD (Create, Read, Update, Delete) operations via the SQLite API. +For this tutorial, we'll use the [HTTP CRUD Go +SQLite](https://github.com/fermyon/enterprise-architectures-and-patterns/tree/main/http-crud-go-sqlite) +sample application. It is a Go-based app implementing CRUD (Create, Read, Update, Delete) operations +via the SQLite API. First, clone the repository locally and navigate to the `http-crud-go-sqlite` directory: @@ -30,7 +42,8 @@ git clone git@github.com:fermyon/enterprise-architectures-and-patterns.git cd enterprise-architectures-and-patterns/http-crud-go-sqlite ``` -Now, build and push the application to a registry you have access to. Here we'll use [ttl.sh](https://ttl.sh): +Now, build and push the application to a registry you have access to. 
Here we'll use +[ttl.sh](https://ttl.sh): ```bash export IMAGE_NAME=ttl.sh/$(uuidgen):1h @@ -40,9 +53,11 @@ spin registry push ${IMAGE_NAME} ## Create a LibSQL database -If you don't already have a LibSQL database that can be used over HTTPS, you can follow along as we set one up via [Turso](https://turso.tech/). +If you don't already have a LibSQL database that can be used over HTTPS, you can follow along as we +set one up via [Turso](https://turso.tech/). -Before proceeding, install the [turso CLI](https://docs.turso.tech/quickstart) and sign up for an account, if you haven't done so already. +Before proceeding, install the [turso CLI](https://docs.turso.tech/quickstart) and sign up for an +account, if you haven't done so already. Create a new database and save its HTTP URL: @@ -59,7 +74,8 @@ export DB_TOKEN=$(turso db tokens create spinkube) ## Create a Kubernetes Secret for the database token -The database token is a sensitive value and thus should be created as a Secret resource in Kubernetes: +The database token is a sensitive value and thus should be created as a Secret resource in +Kubernetes: ```bash kubectl create secret generic turso-auth --from-literal=db-token="${DB_TOKEN}" @@ -70,8 +86,11 @@ kubectl create secret generic turso-auth --from-literal=db-token="${DB_TOKEN}" You're now ready to assemble the SpinApp custom resource manifest. - Note the `image` value uses the reference you published above. -- All of the SQLite database config is set under `spec.runtimeConfig.sqliteDatabases`. See the [sqliteDatabases reference guide]({{< ref "docs/reference/spin-app#spinappspecruntimeconfigsqlitedatabasesindex" >}}) for more details. -- Here we configure the `default` database to use the `libsql` provider type and under `options` supply the database URL and auth token (via its Kubernetes secret) +- All of the SQLite database config is set under `spec.runtimeConfig.sqliteDatabases`. See the + [sqliteDatabases reference guide]({{< ref + "docs/reference/spin-app#spinappspecruntimeconfigsqlitedatabasesindex" >}}) for more details. +- Here we configure the `default` database to use the `libsql` provider type and under `options` + supply the database URL and auth token (via its Kubernetes secret) Plug the `$IMAGE_NAME` and `$DB_URL` values into the manifest below and save as `spinapp.yaml`: @@ -110,7 +129,8 @@ The Spin Operator will handle the creation of the underlying Kubernetes resource ## Test the application -Now you are ready to test the application and verify connectivity and data storage to the configured SQLite database. +Now you are ready to test the application and verify connectivity and data storage to the configured +SQLite database. Configure port forwarding from your local machine to the corresponding Kubernetes `Service`: @@ -121,7 +141,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -When port forwarding is established, you can send HTTP requests to the http-crud-go-sqlite app from within an additional terminal session. Here are a few examples to get you started. +When port forwarding is established, you can send HTTP requests to the http-crud-go-sqlite app from +within an additional terminal session. Here are a few examples to get you started. 
Get current items: diff --git a/content/en/docs/topics/external-variable-providers.md b/content/en/docs/topics/external-variable-providers.md index 09f92cc7..e020fd82 100644 --- a/content/en/docs/topics/external-variable-providers.md +++ b/content/en/docs/topics/external-variable-providers.md @@ -7,41 +7,62 @@ tags: [Tutorials] weight: 12 --- -In the [Assigning Variables](./assigning-variables.md) guide, you learned how to configure variables on the SpinApp via its [variables](../reference/spin-app.md#spinappspecvariablesindex) section, either by supplying values in-line or via a Kubernetes ConfigMap or Secret. +In the [Assigning Variables](./assigning-variables.md) guide, you learned how to configure variables +on the SpinApp via its [variables](../reference/spin-app.md#spinappspecvariablesindex) section, +either by supplying values in-line or via a Kubernetes ConfigMap or Secret. -You can also utilize an external service like [Vault](https://vaultproject.io) or [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault) to provide variable values for your application. This guide will show you how to use and configure both services in tandem with corresponding sample applications. +You can also utilize an external service like [Vault](https://vaultproject.io) or [Azure Key +Vault](https://azure.microsoft.com/en-us/products/key-vault) to provide variable values for your +application. This guide will show you how to use and configure both services in tandem with +corresponding sample applications. ## Prerequisites To follow along with this tutorial, you'll need: -- A Kubernetes cluster running SpinKube. See the [Installation](../install/_index.md) guides for more information. +- A Kubernetes cluster running SpinKube. See the [Installation](../install/_index.md) guides for + more information. - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl) - The [spin CLI](https://developer.fermyon.com/spin/v2/install ) -- The [kube plugin for Spin](https://github.com/spinkube/spin-plugin-kube?tab=readme-ov-file#install) +- The [kube plugin for + Spin](https://github.com/spinkube/spin-plugin-kube?tab=readme-ov-file#install) ## Supported providers -Spin currently supports [Vault](#vault-provider) and [Azure Key Vault](#azure-key-vault-provider) as external variable providers. Configuration is supplied to the application via a [Runtime Configuration file](https://developer.fermyon.com/spin/v2/dynamic-configuration#dynamic-and-runtime-application-configuration). +Spin currently supports [Vault](#vault-provider) and [Azure Key Vault](#azure-key-vault-provider) as +external variable providers. Configuration is supplied to the application via a [Runtime +Configuration +file](https://developer.fermyon.com/spin/v2/dynamic-configuration#dynamic-and-runtime-application-configuration). -In SpinKube, this configuration file can be supplied in the form of a Kubernetes secret and linked to a SpinApp via its [runtimeConfig.loadFromSecret](https://www.spinkube.dev/docs/reference/spin-app/#spinappspecruntimeconfig) section. +In SpinKube, this configuration file can be supplied in the form of a Kubernetes secret and linked +to a SpinApp via its +[runtimeConfig.loadFromSecret](https://www.spinkube.dev/docs/reference/spin-app/#spinappspecruntimeconfig) +section. -> Note: `loadFromSecret` takes precedence over any other `runtimeConfig` configuration. 
Thus, *all* runtime configuration must be contained in the Kubernetes secret, including [SQLite](../reference/spin-app.md#spinappspecruntimeconfigsqlitedatabasesindex), [Key Value](../reference/spin-app.md#spinappspecruntimeconfigkeyvaluestoresindex) and [LLM](../reference/spin-app.md#spinappspecruntimeconfigllmcompute) options that might otherwise be specified via their dedicated specs. +> Note: `loadFromSecret` takes precedence over any other `runtimeConfig` configuration. Thus, *all* +> runtime configuration must be contained in the Kubernetes secret, including +> [SQLite](../reference/spin-app.md#spinappspecruntimeconfigsqlitedatabasesindex), [Key +> Value](../reference/spin-app.md#spinappspecruntimeconfigkeyvaluestoresindex) and +> [LLM](../reference/spin-app.md#spinappspecruntimeconfigllmcompute) options that might otherwise be +> specified via their dedicated specs. Let's look at examples utilizing specific provider configuration next. # Vault provider -[Vault](https://vaultproject.io) is a popular choice for storing secrets and serving as a secure key-value store. +[Vault](https://vaultproject.io) is a popular choice for storing secrets and serving as a secure +key-value store. This guide assumes you have: - + - A [Vault cluster](https://www.vaultproject.io/) - The [vault CLI](https://developer.hashicorp.com/vault/docs/install) ### Build and publish the Spin application -We'll use the [variable explorer app](https://github.com/spinkube/spin-operator/tree/main/apps/variable-explorer) to test this integration. +We'll use the [variable explorer +app](https://github.com/spinkube/spin-operator/tree/main/apps/variable-explorer) to test this +integration. First, clone the repository locally and navigate to the `variable-explorer` directory: @@ -50,7 +71,8 @@ git clone git@github.com:spinkube/spin-operator.git cd apps/variable-explorer ``` -Now, build and push the application to a registry you have access to. Here we'll use [ttl.sh](https://ttl.sh): +Now, build and push the application to a registry you have access to. Here we'll use +[ttl.sh](https://ttl.sh): ```bash spin build @@ -69,11 +91,15 @@ token = "my_token" mount = "admin/secret" ``` -To use this sample, you'll want to update the `url` and `token` fields with values applicable to your Vault cluster. The `mount` value will depend on the Vault namespace and `kv-v2` secrets engine name. In this sample, the namespace is `admin` and the engine is named `secret`, eg by running `vault secrets enable --path=secret kv-v2`. +To use this sample, you'll want to update the `url` and `token` fields with values applicable to +your Vault cluster. The `mount` value will depend on the Vault namespace and `kv-v2` secrets engine +name. In this sample, the namespace is `admin` and the engine is named `secret`, eg by running +`vault secrets enable --path=secret kv-v2`. 
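+
+If you are unsure which engines are enabled in your Vault namespace, you can list the mounts as a
+quick sanity check before continuing (optional):
+
+```bash
+# Confirm that the kv-v2 engine is mounted at the expected path
+vault secrets list
+```
+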
### Create the secrets in Vault -Create the `log_level`, `platform_name` and `db_password` secrets used by the variable-explorer application in Vault: +Create the `log_level`, `platform_name` and `db_password` secrets used by the variable-explorer +application in Vault: ```bash vault kv put secret/log_level value=INFO @@ -83,7 +109,8 @@ vault kv put secret/db_password value=secret_sauce ### Create the SpinApp and Secret -Next, scaffold the SpinApp and Secret resource (containing the `runtime-config.toml` data) together in one go via the `kube` plugin: +Next, scaffold the SpinApp and Secret resource (containing the `runtime-config.toml` data) together +in one go via the `kube` plugin: ```bash spin kube scaffold -f ttl.sh/variable-explorer:1h -c runtime-config.toml -o scaffold.yaml @@ -97,7 +124,8 @@ kubectl apply -f scaffold.yaml ### Test the application -You are now ready to test the application and verify that all variables are passed correctly to the SpinApp from the Vault provider. +You are now ready to test the application and verify that all variables are passed correctly to the +SpinApp from the Vault provider. Configure port forwarding from your local machine to the corresponding Kubernetes `Service`: @@ -108,7 +136,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -When port forwarding is established, you can send an HTTP request to the variable-explorer from within an additional terminal session: +When port forwarding is established, you can send an HTTP request to the variable-explorer from +within an additional terminal session: ```bash curl http://localhost:8080 @@ -127,7 +156,8 @@ kubectl logs -l core.spinoperator.dev/app-name=variable-explorer # Azure Key Vault provider -[Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault) is a secure secret store for distributed applications hosted on the [Azure](https://azure.microsoft.com) platform. +[Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault) is a secure secret store for +distributed applications hosted on the [Azure](https://azure.microsoft.com) platform. This guide assumes you have: @@ -136,7 +166,9 @@ This guide assumes you have: ### Build and publish the Spin application -We'll use the [Azure Key Vault Provider](https://github.com/fermyon/enterprise-architectures-and-patterns/tree/main/application-variable-providers/azure-key-vault-provider) sample application for this exercise. +We'll use the [Azure Key Vault +Provider](https://github.com/fermyon/enterprise-architectures-and-patterns/tree/main/application-variable-providers/azure-key-vault-provider) +sample application for this exercise. First, clone the repository locally and navigate to the `azure-key-vault-provider` directory: @@ -145,14 +177,16 @@ git clone git@github.com:fermyon/enterprise-architectures-and-patterns.git cd enterprise-architectures-and-patterns/application-variable-providers/azure-key-vault-provider ``` -Now, build and push the application to a registry you have access to. Here we'll use [ttl.sh](https://ttl.sh): +Now, build and push the application to a registry you have access to. Here we'll use +[ttl.sh](https://ttl.sh): ```bash spin build spin registry push ttl.sh/azure-key-vault-provider:1h ``` -The next steps will guide you in creating and configuring an Azure Key Vault and populating the runtime configuration file with connection credentials. 
+The next steps will guide you in creating and configuring an Azure Key Vault and populating the +runtime configuration file with connection credentials. ### Deploy Azure Key Vault @@ -206,7 +240,8 @@ az role assignment create --assignee $CLIENT_ID \ ### Create the `runtime-config.toml` file -Create a `runtime-config.toml` file with the following contents, substituting in the values for `KV_NAME`, `CLIENT_ID`, `CLIENT_SECRET` and `TENANT_ID` from the previous steps. +Create a `runtime-config.toml` file with the following contents, substituting in the values for +`KV_NAME`, `CLIENT_ID`, `CLIENT_SECRET` and `TENANT_ID` from the previous steps. ```toml [[config_provider]] @@ -220,7 +255,8 @@ authority_host = "AzurePublicCloud" ### Create the SpinApp and Secret -Scaffold the SpinApp and Secret resource (containing the `runtime-config.toml` data) together in one go via the `kube` plugin: +Scaffold the SpinApp and Secret resource (containing the `runtime-config.toml` data) together in one +go via the `kube` plugin: ```bash spin kube scaffold -f ttl.sh/azure-key-vault-provider:1h -c runtime-config.toml -o scaffold.yaml @@ -234,7 +270,8 @@ kubectl apply -f scaffold.yaml ### Test the application -Now you are ready to test the application and verify that the secret resolves its value from Azure Key Vault. +Now you are ready to test the application and verify that the secret resolves its value from Azure +Key Vault. Configure port forwarding from your local machine to the corresponding Kubernetes `Service`: @@ -245,7 +282,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -When port forwarding is established, you can send an HTTP request to the azure-key-vault-provider app from within an additional terminal session: +When port forwarding is established, you can send an HTTP request to the azure-key-vault-provider +app from within an additional terminal session: ```bash curl http://localhost:8080 diff --git a/content/en/docs/topics/packaging.md b/content/en/docs/topics/packaging.md index 9caad70a..bdfe1448 100644 --- a/content/en/docs/topics/packaging.md +++ b/content/en/docs/topics/packaging.md @@ -9,7 +9,8 @@ aliases: - /docs/spin-operator/tutorials/package-and-deploy --- -This article explains how Spin Apps are packaged and distributed via both public and private registries. You will learn how to: +This article explains how Spin Apps are packaged and distributed via both public and private +registries. You will learn how to: - Package and distribute Spin Apps - Deploy Spin Apps @@ -27,7 +28,9 @@ For this tutorial in particular, you need ## Creating a new Spin App -You use the `spin` CLI, to create a new Spin App. The `spin` CLI provides different templates, which you can use to quickly create different kinds of Spin Apps. For demonstration purposes, you will use the `http-go` template to create a simple Spin App. +You use the `spin` CLI, to create a new Spin App. The `spin` CLI provides different templates, which +you can use to quickly create different kinds of Spin Apps. For demonstration purposes, you will use +the `http-go` template to create a simple Spin App. ```shell # Create a new Spin App using the http-go template @@ -37,7 +40,8 @@ spin new --accept-defaults -t http-go hello-spin cd hello-spin ``` -The `spin` CLI created all necessary files within `hello-spin`. Besides the Spin Manifest (`spin.toml`), you can find the actual implementation of the app in `main.go`: +The `spin` CLI created all necessary files within `hello-spin`. 
Besides the Spin Manifest +(`spin.toml`), you can find the actual implementation of the app in `main.go`: ```go package main @@ -59,26 +63,36 @@ func init() { func main() {} ``` -This implementation will respond to any incoming HTTP request, and return an HTTP response with a status code of 200 (`Ok`) and send `Hello Fermyon` as the response body. +This implementation will respond to any incoming HTTP request, and return an HTTP response with a +status code of 200 (`Ok`) and send `Hello Fermyon` as the response body. -You can test the app on your local machine by invoking the `spin up` command from within the `hello-spin` folder. +You can test the app on your local machine by invoking the `spin up` command from within the +`hello-spin` folder. ## Packaging and Distributing Spin Apps -Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can be distributed using any registry that implements the [Open Container Initiative Distribution Specification](https://github.com/opencontainers/distribution-spec) (a.k.a. "OCI Distribution Spec"). +Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can +be distributed using any registry that implements the [Open Container Initiative Distribution +Specification](https://github.com/opencontainers/distribution-spec) (a.k.a. "OCI Distribution +Spec"). -The `spin` CLI simplifies packaging and distribution of Spin Apps and provides an atomic command for this (`spin registry push`). You can package and distribute the `hello-spin` app that you created as part of the previous section like this: +The `spin` CLI simplifies packaging and distribution of Spin Apps and provides an atomic command for +this (`spin registry push`). You can package and distribute the `hello-spin` app that you created as +part of the previous section like this: ```shell # Package and Distribute the hello-spin app spin registry push --build ttl.sh/hello-spin:24h ``` -> It is a good practice to add the `--build` flag to `spin registry push`. It prevents you from accidentally pushing an outdated version of your Spin App to your registry of choice. +> It is a good practice to add the `--build` flag to `spin registry push`. It prevents you from +> accidentally pushing an outdated version of your Spin App to your registry of choice. ## Deploying Spin Apps -To deploy Spin Apps to a Kubernetes cluster which has Spin Operator running, you use the `kube` plugin for `spin`. Use the `spin kube deploy` command as shown here to deploy the `hello-spin` app to your Kubernetes cluster: +To deploy Spin Apps to a Kubernetes cluster which has Spin Operator running, you use the `kube` +plugin for `spin`. Use the `spin kube deploy` command as shown here to deploy the `hello-spin` app +to your Kubernetes cluster: ```shell # Deploy the hello-spin app to your Kubernetes Cluster @@ -89,7 +103,9 @@ spinapp.core.spinoperator.dev/hello-spin created ## Scaffolding Spin Apps -In the previous section, you deployed the `hello-spin` app using the `spin kube deploy` command. Although this is handy, you may want to inspect, or alter the Kubernetes manifests before applying them. You use the `spin kube scaffold` command to generate Kubernetes manifests: +In the previous section, you deployed the `hello-spin` app using the `spin kube deploy` command. +Although this is handy, you may want to inspect, or alter the Kubernetes manifests before applying +them. 
You use the `spin kube scaffold` command to generate Kubernetes manifests:

```shell
spin kube scaffold --from ttl.sh/hello-spin:24h
@@ -102,7 +118,8 @@ spec:
  replicas: 2
```

-By default, the command will print all Kubernetes menifests to `STDOUT`. Alternatively, you can specify the `out` argument to store the manifests to a file:
+By default, the command will print all Kubernetes manifests to `STDOUT`. Alternatively, you can
+specify the `out` argument to store the manifests in a file:

```shell
# Scaffold manifests to spinapp.yaml
@@ -122,9 +139,14 @@ spec:

## Distributing and Deploying Spin Apps via private registries

-It is quite common to distribute Spin Apps through private registries that require some sort of authentication. To publish a Spin App to a private registry, you have to authenticate using the `spin registry login` command.
+It is quite common to distribute Spin Apps through private registries that require some sort of
+authentication. To publish a Spin App to a private registry, you have to authenticate using the
+`spin registry login` command.

-For demonstration purposes, you will now distribute the Spin App via GitHub Container Registry (GHCR). You can follow [this guide by GitHub](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic) to create a new personal access token (PAT), which is required for authentication.
+For demonstration purposes, you will now distribute the Spin App via GitHub Container Registry
+(GHCR). You can follow [this guide by
+GitHub](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic)
+to create a new personal access token (PAT), which is required for authentication.

```shell
# Store PAT and GitHub username as environment variables
@@ -147,7 +169,9 @@ Pushing app to the Registry...
Pushed with digest sha256:1611d51b296574f74b99df1391e2dc65f210e9ea695fbbce34d770ecfcfba581
```

-In Kubernetes you store authentication information as secret of type `docker-registry`. The following snippet shows how to create such a secret with `kubectl` leveraging the environment variables, you specified in the previous section:
+In Kubernetes, you store authentication information as a secret of type `docker-registry`. The
+following snippet shows how to create such a secret with `kubectl` leveraging the environment
+variables you specified in the previous section:

```shell
# Create Secret in Kubernetes
@@ -167,7 +191,9 @@ spin kube scaffold --from ghcr.io/$GH_USER/hello-spin:0.0.1 \
  --out spinapp.yaml
```

-Before deploying the manifest with `kubectl`, update `spinapp.yaml` and link the `ghcr` secret you previously created using the `imagePullSecrets` property. Your `SpinApp` manifest should look like this:
+Before deploying the manifest with `kubectl`, update `spinapp.yaml` and link the `ghcr` secret you
+previously created using the `imagePullSecrets` property.
Your `SpinApp` manifest should look like +this: ```yaml apiVersion: core.spinoperator.dev/v1alpha1 @@ -182,7 +208,8 @@ spec: executor: containerd-shim-spin ``` -> `$GH_USER` should match the actual username provided while running through the previous sections of this article +> `$GH_USER` should match the actual username provided while running through the previous sections +> of this article Finally, you can deploy the app using `kubectl apply`: diff --git a/content/en/docs/topics/using-a-key-value-store.md b/content/en/docs/topics/using-a-key-value-store.md index 2c369ecb..22fb6b7a 100644 --- a/content/en/docs/topics/using-a-key-value-store.md +++ b/content/en/docs/topics/using-a-key-value-store.md @@ -7,21 +7,32 @@ tags: [Tutorials] weight: 14 --- -Spin applications can utilize a [standardized API for persisting data in a key value store](https://developer.fermyon.com/spin/v2/kv-store-api-guide). The default key value store in Spin is an SQLite database, which is great for quickly utilizing non-relational local storage without any infrastructure set-up. However, this solution may not be preferable for an app running in the context of SpinKube, where apps are often scaled beyond just one replica. +Spin applications can utilize a [standardized API for persisting data in a key value +store](https://developer.fermyon.com/spin/v2/kv-store-api-guide). The default key value store in +Spin is an SQLite database, which is great for quickly utilizing non-relational local storage +without any infrastructure set-up. However, this solution may not be preferable for an app running +in the context of SpinKube, where apps are often scaled beyond just one replica. -Thankfully, Spin supports configuring an application with an [external key value provider](https://developer.fermyon.com/spin/v2/dynamic-configuration#key-value-store-runtime-configuration). External providers include [Redis](https://redis.io/) or [Valkey](https://valkey.io/) and [Azure Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db). +Thankfully, Spin supports configuring an application with an [external key value +provider](https://developer.fermyon.com/spin/v2/dynamic-configuration#key-value-store-runtime-configuration). +External providers include [Redis](https://redis.io/) or [Valkey](https://valkey.io/) and [Azure +Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db). ## Prerequisites To follow along with this tutorial, you'll need: -- A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for more information. +- A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for + more information. - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl) - The [spin CLI](https://developer.fermyon.com/spin/v2/install ) ## Build and publish the Spin application -For this tutorial, we'll use a [Spin key/value application](https://github.com/fermyon/spin-go-sdk/tree/main/examples/key-value) written with the Go SDK. The application serves a CRUD (Create, Read, Update, Delete) API for managing key/value pairs. +For this tutorial, we'll use a [Spin key/value +application](https://github.com/fermyon/spin-go-sdk/tree/main/examples/key-value) written with the +Go SDK. The application serves a CRUD (Create, Read, Update, Delete) API for managing key/value +pairs. 
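+
+The build step below assumes the Go toolchain for Spin is available locally. Apps built with the
+Spin Go SDK are typically compiled with TinyGo, so if `spin build` fails it is worth confirming
+that both tools are installed first (a quick check; exact versions will vary):
+
+```bash
+spin --version
+tinygo version
+```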
First, clone the repository locally and navigate to the `examples/key-value` directory: @@ -30,7 +41,8 @@ git clone git@github.com:fermyon/spin-go-sdk.git cd examples/key-value ``` -Now, build and push the application to a registry you have access to. Here we'll use [ttl.sh](https://ttl.sh): +Now, build and push the application to a registry you have access to. Here we'll use +[ttl.sh](https://ttl.sh): ```bash export IMAGE_NAME=ttl.sh/$(uuidgen):1h @@ -40,13 +52,18 @@ spin registry push ${IMAGE_NAME} ## Configure an external key value provider -Since we have access to a Kubernetes cluster already running SpinKube, we'll choose [Valkey](https://valkey.io/) for our key value provider and install this provider via Bitnami's [Valkey Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/valkey). Valkey is swappable for Redis in Spin, though note we do need to supply a URL using the `redis://` protocol rather than `valkey://`. +Since we have access to a Kubernetes cluster already running SpinKube, we'll choose +[Valkey](https://valkey.io/) for our key value provider and install this provider via Bitnami's +[Valkey Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/valkey). Valkey is swappable +for Redis in Spin, though note we do need to supply a URL using the `redis://` protocol rather than +`valkey://`. ```bash helm install valkey --namespace valkey --create-namespace oci://registry-1.docker.io/bitnamicharts/valkey ``` -As mentioned in the notes shown after successful installation, be sure to capture the valkey password for use later: +As mentioned in the notes shown after successful installation, be sure to capture the valkey +password for use later: ```bash export VALKEY_PASSWORD=$(kubectl get secret --namespace valkey valkey -o jsonpath="{.data.valkey-password}" | base64 -d) @@ -54,7 +71,9 @@ export VALKEY_PASSWORD=$(kubectl get secret --namespace valkey valkey -o jsonpat ## Create a Kubernetes Secret for the Valkey URL -The runtime configuration will require the Valkey URL so that it can connect to this provider. As this URL contains the sensitive password string, we will create it as a Secret resource in Kubernetes: +The runtime configuration will require the Valkey URL so that it can connect to this provider. As +this URL contains the sensitive password string, we will create it as a Secret resource in +Kubernetes: ```bash kubectl create secret generic kv-secret --from-literal=valkey-url="redis://:${VALKEY_PASSWORD}@valkey-master.valkey.svc.cluster.local:6379" @@ -64,8 +83,11 @@ kubectl create secret generic kv-secret --from-literal=valkey-url="redis://:${VA You're now ready to assemble the SpinApp custom resource manifest for this application. -- All of the key value config is set under `spec.runtimeConfig.keyValueStores`. See the [keyValueStores reference guide]({{< ref "docs/reference/spin-app#spinappspecruntimeconfigkeyvaluestoresindex" >}}) for more details. -- Here we configure the `default` store to use the `redis` provider type and under `options` supply the Valkey URL (via its Kubernetes secret) +- All of the key value config is set under `spec.runtimeConfig.keyValueStores`. See the + [keyValueStores reference guide]({{< ref + "docs/reference/spin-app#spinappspecruntimeconfigkeyvaluestoresindex" >}}) for more details. 
+- Here we configure the `default` store to use the `redis` provider type and under `options` supply + the Valkey URL (via its Kubernetes secret) Plug the `$IMAGE_NAME` and `$DB_URL` values into the manifest below and save as `spinapp.yaml`: @@ -102,7 +124,8 @@ The Spin Operator will handle the creation of the underlying Kubernetes resource ## Test the application -Now you are ready to test the application and verify connectivity and key value storage to the configured provider. +Now you are ready to test the application and verify connectivity and key value storage to the +configured provider. Configure port forwarding from your local machine to the corresponding Kubernetes `Service`: @@ -113,7 +136,8 @@ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` -When port forwarding is established, you can send HTTP requests to the application from within an additional terminal session. Here are a few examples to get you started. +When port forwarding is established, you can send HTTP requests to the application from within an +additional terminal session. Here are a few examples to get you started. Create a `test` key with value `ok!`: @@ -155,4 +179,4 @@ content-length: 12 date: Mon, 29 Jul 2024 19:59:44 GMT no such key -``` \ No newline at end of file +``` From 148425e06f09305c959aae0c8f945178e9f44c10 Mon Sep 17 00:00:00 2001 From: Matthew Fisher Date: Mon, 19 Aug 2024 08:28:23 -0700 Subject: [PATCH 2/3] blog: wrap at 100 characters Signed-off-by: Matthew Fisher --- content/en/about/index.md | 47 +++++--- .../blog/community/spinkube-kind-rd/index.md | 71 ++++++++---- content/en/blog/news/first-post/index.md | 106 ++++++++++++------ 3 files changed, 151 insertions(+), 73 deletions(-) diff --git a/content/en/about/index.md b/content/en/about/index.md index 40f2a16c..39cdcd24 100644 --- a/content/en/about/index.md +++ b/content/en/about/index.md @@ -2,10 +2,9 @@ title: SpinKube --- -{{< blocks/cover title="Welcome to SpinKube" image_anchor="top" height="full" >}} -}}"> - SpinKube Documentation - +{{< blocks/cover title="Welcome to SpinKube" image_anchor="top" height="full" >}} }}"> SpinKube Documentation

A new open source project that streamlines the experience of developing, deploying, and operating Wasm workloads on Kubernetes.

{{< blocks/link-down color="info" >}} {{< /blocks/cover >}} @@ -15,32 +14,48 @@ title: SpinKube SpinKube comprises the following open source projects. -
-
+

**Containerd Shim Spin** -The [Containerd Shim Spin repository](https://github.com/spinkube/containerd-shim-spin) provides shim implementations for running WebAssembly ([Wasm](https://webassembly.org/)) / Wasm System Interface ([WASI](https://github.com/WebAssembly/WASI)) workloads using [runwasi](https://github.com/deislabs/runwasi) as a library, whereby workloads built using the [Spin framework](https://github.com/fermyon/spin) can function similarly to container workloads in a Kubernetes environment. +The [Containerd Shim Spin repository](https://github.com/spinkube/containerd-shim-spin) provides +shim implementations for running WebAssembly ([Wasm](https://webassembly.org/)) / Wasm System +Interface ([WASI](https://github.com/WebAssembly/WASI)) workloads using +[runwasi](https://github.com/deislabs/runwasi) as a library, whereby workloads built using the [Spin +framework](https://github.com/fermyon/spin) can function similarly to container workloads in a +Kubernetes environment. -
-
+

**Runtime Class Manager** -The [Runtime Class Manager, also known as the Containerd Shim Lifecycle Operator](https://github.com/spinkube/runtime-class-manager), is designed to automate and manage the lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation, update, removal, and configuration of shims, reducing manual errors and improving reliability in managing WebAssembly (Wasm) workloads and other containerd extensions. +The [Runtime Class Manager, also known as the Containerd Shim Lifecycle +Operator](https://github.com/spinkube/runtime-class-manager), is designed to automate and manage the +lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation, +update, removal, and configuration of shims, reducing manual errors and improving reliability in +managing WebAssembly (Wasm) workloads and other containerd extensions. -
-
+

**Spin Plugin for Kubernetes**
-The [Spin plugin for Kubernetes](https://github.com/spinkube/spin-plugin-kube), known as `spin kube`, faciliates the translation of existing [Spin applications](https://developer.fermyon.com/spin) into the Kubernetes custom resource that will be deployed and managed on your cluster. This plugin works by taking your spin application manifest and scaffolding it into a Kubernetes yaml, which can be deployed and managed with `kubectl`. This allows Kubernetes to manage and run Wasm workloads in a way similar to traditional container workloads.
+The [Spin plugin for Kubernetes](https://github.com/spinkube/spin-plugin-kube), known as `spin
+kube`, facilitates the translation of existing [Spin
+applications](https://developer.fermyon.com/spin) into the Kubernetes custom resource that will be
+deployed and managed on your cluster. This plugin works by taking your Spin application manifest and
+scaffolding it into Kubernetes YAML, which can be deployed and managed with `kubectl`. This allows
+Kubernetes to manage and run Wasm workloads in a way similar to traditional container workloads.

-
-
+

**Spin Operator** -The [Spin Operator](https://github.com/spinkube/spin-operator/) enables deploying Spin applications to Kubernetes. The foundation of this project is built using the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. Spin Operator defines Spin App Custom Resource Definitions (CRDs). Spin Operator watches SpinApp Custom Resources e.g. Spin app image, replicas, schedulers and other user-defined values and realizes the desired state in the Kubernetes cluster. Spin Operator introduces a host of functionality such as resource-based scaling, event-driven scaling, and much more. +The [Spin Operator](https://github.com/spinkube/spin-operator/) enables deploying Spin applications +to Kubernetes. The foundation of this project is built using the +[kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. Spin Operator defines Spin +App Custom Resource Definitions (CRDs). Spin Operator watches SpinApp Custom Resources e.g. Spin app +image, replicas, schedulers and other user-defined values and realizes the desired state in the +Kubernetes cluster. Spin Operator introduces a host of functionality such as resource-based scaling, +event-driven scaling, and much more. {{% /blocks/lead %}} diff --git a/content/en/blog/community/spinkube-kind-rd/index.md b/content/en/blog/community/spinkube-kind-rd/index.md index cd96bf76..d92c6462 100644 --- a/content/en/blog/community/spinkube-kind-rd/index.md +++ b/content/en/blog/community/spinkube-kind-rd/index.md @@ -10,7 +10,9 @@ resources: title: "Image #:counter" --- -The goal of this guide is show a way to bring [SpinKube](https://www.spinkube.dev/) to [KinD](https://kind.sigs.k8s.io/) without the need of a custom image, like the [SpinKube on k3d](https://www.spinkube.dev/docs/spin-operator/quickstart/) example. +The goal of this guide is show a way to bring [SpinKube](https://www.spinkube.dev/) to +[KinD](https://kind.sigs.k8s.io/) without the need of a custom image, like the [SpinKube on +k3d](https://www.spinkube.dev/docs/spin-operator/quickstart/) example. Instead, the Rancher Desktop (RD) Spin plugin will be used alongside KinD cluster configuration. @@ -34,15 +36,18 @@ In order to follow this guide, the following applications need to be installed: - Kubernetes is disabled - KinD v0.23 - This is the first version with the `nerdctl` provider - - If not yet available, you might need to build it (see [Bonus 1: build KinD](#bonus-1-build-kind)) + - If not yet available, you might need to build it (see [Bonus 1: build + KinD](#bonus-1-build-kind)) Concerning the Kubernetes tooling, Rancher Desktop already covers it. ### Connecting the Dots -The reason KinD v0.23 is needed with the `nerdctl` provider is because the Spin plugin only works on Rancher Desktop when `containerd` runtime is selected, instead of `docker`. +The reason KinD v0.23 is needed with the `nerdctl` provider is because the Spin plugin only works on +Rancher Desktop when `containerd` runtime is selected, instead of `docker`. -If it's still "obscure", keep reading and hopefully it will make sense (yes, not yet spoiling how the Spin plugin will be leveraged). +If it's still "obscure", keep reading and hopefully it will make sense (yes, not yet spoiling how +the Spin plugin will be leveraged). ## KinD Configurations @@ -50,10 +55,12 @@ This section should clarify how the Spin plugin will be leveraged. 
### Containerd Configuration File -The first configuration is related to `containerd`, and more precisely, the one running inside the KinD container(s): +The first configuration is related to `containerd`, and more precisely, the one running inside the +KinD container(s): - Create a file in your `$HOME` directory called `config.toml` - - You can create it inside a directory, however it still should be located in your `$HOME` directory + - You can create it inside a directory, however it still should be located in your `$HOME` + directory - The location will be important when creating the KinD cluster - Paste the following content inside the `config.toml` file: @@ -100,20 +107,22 @@ version = 2 tolerate_missing_hugepages_controller = true # restrict_oom_score_adj needs to be true when running inside UserNS (rootless) restrict_oom_score_adj = false - + [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin] runtime_type = "/usr/local/bin/containerd-shim-spin-v2" ``` -> NOTE: this file is a copy of the original one that can be found inside the KinD container. The only addition to the file is the declaration of the `spin` plugin (the last 2 lines) +> NOTE: this file is a copy of the original one that can be found inside the KinD container. The +> only addition to the file is the declaration of the `spin` plugin (the last 2 lines) ### KinD Configuration File The second configuration file is related to KinD and will be used when creating a new cluster: - Create a file in your `$HOME` directory called `kind-spin.yaml` (for example) - - You can create it inside a directory, however it still should be located in your `$HOME` directory + - You can create it inside a directory, however it still should be located in your `$HOME` + directory - The location will be important when creating the KinD cluster **Windows Users ONLY** @@ -152,15 +161,22 @@ EOF Rancher Desktop leverages two different technologies depending on the OS its installed. -On Windows, [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) will be used and for Linux and MacOS, [Lima](https://lima-vm.io/docs/) is the preferred choice. +On Windows, [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) will be used and for Linux +and MacOS, [Lima](https://lima-vm.io/docs/) is the preferred choice. -While both technologies run Linux in a microVM, the behaviors differ in some parts. And the mountpoints with the host system are one of these differences. +While both technologies run Linux in a microVM, the behaviors differ in some parts. And the +mountpoints with the host system are one of these differences. -In the case of RD on WSL, the file generated is created **inside** the microVM, as nerdctl will need to have acceess to the file's path. Technically speaking, the mountpoint `/mnt/c` could also be used, however sometimes it's not available due to WSL main configuration. This way should be a bit more generic. +In the case of RD on WSL, the file generated is created **inside** the microVM, as nerdctl will need +to have acceess to the file's path. Technically speaking, the mountpoint `/mnt/c` could also be +used, however sometimes it's not available due to WSL main configuration. This way should be a bit +more generic. -Concerning RD on Lima, `$HOME` is mounted inside the microVM, therefore nerdctl will already see the files, and there's not need on copying the files over like it's done for WSL. 
+Concerning RD on Lima, `$HOME` is mounted inside the microVM, therefore nerdctl will already see the
+files, and there's no need to copy the files over as is done for WSL.

-Finally, on both cases, the binary `containerd-shim-spin-v2` is already accessible inside the microVMs.
+Finally, in both cases, the binary `containerd-shim-spin-v2` is already accessible inside the
+microVMs.

## Create KinD Cluster

@@ -190,15 +206,20 @@ kind create cluster --config=$HOME/kind-spin.yaml

![KinD create cluster on *nix](lima-kind-create-cluster.png)

-Now that you have a KinD cluster running with the spin plugin enabled for `containerd`. However, it is not yet used by Kubernetes (`runtimeClass`). This will be done on the next section.
+You now have a KinD cluster running with the Spin plugin enabled for `containerd`. However, it is
+not yet used by Kubernetes (`runtimeClass`); this will be done in the next section.

## Deploy SpinKube

-From here, you can reference the [excellent quickstart to deploy SpinKube](https://www.spinkube.dev/docs/spin-operator/quickstart/) for a detailed explanation of each step.
+From here, you can reference the [excellent quickstart to deploy
+SpinKube](https://www.spinkube.dev/docs/spin-operator/quickstart/) for a detailed explanation of
+each step.

-To avoid repetition, and to encourage you to go read the quickstart (and the overall SpinKube docs), the steps below will only include short descriptions:
+To avoid repetition, and to encourage you to go read the quickstart (and the overall SpinKube docs),
+the steps below will only include short descriptions:

-> **IMPORTANT:** the following commands are "universal", working on both powershell and bash/zsh. The "multiline characters" have been removed on purpose (\` for powershell and \\ for bash).
+> **IMPORTANT:** the following commands are "universal", working on both powershell and bash/zsh.
+> The "multiline characters" have been removed on purpose (\` for powershell and \\ for bash).

```shell
# Install cert-manager
@@ -243,13 +264,19 @@ Congratulations! You have a cluster with SpinKube running.

## Conclusion

-First of all, THANK YOU to all the projects maintainers and contributors! Without you, there wouldn't be blogs like this one.
+First of all, THANK YOU to all the projects' maintainers and contributors! Without you, there
+wouldn't be blogs like this one.

-Secondly, as you may know or not, this is **highly experimental**, and the main purpose was more a proof-of-concept rather than a real solution.
+Secondly, as you may or may not know, this is **highly experimental**, and the main purpose was
+more a proof-of-concept than a real solution.

-Lastly, SpinKube on Rancher Desktop has been tested, both by Fermyon and SUSE, and it's suggested that you [follow this howto](https://www.spinkube.dev/docs/spin-operator/tutorials/integrating-with-rancher-desktop/) for a long-term environment.
+Lastly, SpinKube on Rancher Desktop has been tested, both by Fermyon and SUSE, and it's suggested
+that you [follow this
+howto](https://www.spinkube.dev/docs/spin-operator/tutorials/integrating-with-rancher-desktop/) for
+a long-term environment.

-Special thanks to Fermyon for hosting this (first) blog on SpinKube and thanks to anyone reaching this last line, you mean the world to me.
+Special thanks to Fermyon for hosting this (first) blog on SpinKube and thanks to anyone reaching
+this last line, you mean the world to me.
> \>>> The Corsair <<< diff --git a/content/en/blog/news/first-post/index.md b/content/en/blog/news/first-post/index.md index c35cfacd..a077a43d 100644 --- a/content/en/blog/news/first-post/index.md +++ b/content/en/blog/news/first-post/index.md @@ -9,34 +9,55 @@ resources: - src: "**.{png,jpg}" title: "Image #:counter" --- -Today we're introducing SpinKube - an open source platform for efficiently -running Spin-based WebAssembly (Wasm) applications on Kubernetes. Built with love by -folks from Microsoft, SUSE, LiquidReply, and Fermyon. +Today we're introducing SpinKube - an open source platform for efficiently running Spin-based +WebAssembly (Wasm) applications on Kubernetes. Built with love by folks from Microsoft, SUSE, +LiquidReply, and Fermyon. -SpinKube combines the application lifecycle management of the [Spin Operator][spin-operator], the efficiency of the [containerd-shim-spin][containerd-shim-spin], and the node lifecycle management of the forthcoming [runtime-class-manager][runtime-class-manager] (formerly [KWasm][kwasm]) to provide an excellent developer and operator experience alongside excellent density and scaling characteristics regardless of processor architecture or deployment environment. +SpinKube combines the application lifecycle management of the [Spin Operator][spin-operator], the +efficiency of the [containerd-shim-spin][containerd-shim-spin], and the node lifecycle management of +the forthcoming [runtime-class-manager][runtime-class-manager] (formerly [KWasm][kwasm]) to provide +an excellent developer and operator experience alongside excellent density and scaling +characteristics regardless of processor architecture or deployment environment. > -> TL;DR? Check out the [quickstart][quickstart] to learn how to deploy -> Wasm to Kubernetes today. -> +> TL;DR? Check out the [quickstart][quickstart] to learn how to deploy Wasm to Kubernetes today. +> ## Why Serverless? -Containers and Kubernetes revolutionized the field of software development and operations. A unified packaging and dependency management system that was flexible enough to allow most applications to become portable - to run the same versions locally as you did in production - was a desperately needed iteration from the Virtual Machines that came before. +Containers and Kubernetes revolutionized the field of software development and operations. A unified +packaging and dependency management system that was flexible enough to allow most applications to +become portable - to run the same versions locally as you did in production - was a desperately +needed iteration from the Virtual Machines that came before. -But while containers give us a relatively light weight abstraction layer, they come with a variety of issues: +But while containers give us a relatively light weight abstraction layer, they come with a variety +of issues: -- *Change management* - patching system dependencies (such as OpenSSL) need to happen on every image independently, which can become difficult to manage at large scale. -- *Rightsizing workloads* - managing access to shared resources like CPU and memory correctly is difficult, which often leads to over-provisioning resources (over introducing downtime) resulting in low utilization of host resources. -- *Size* - containers are often bigger than needed (impacting important metrics like startup time), and often include extraneous system dependencies by default which are not required to run the application. 
+- *Change management* - patching system dependencies (such as OpenSSL) needs to happen on every image
+  independently, which can become difficult to manage at large scale.
+- *Rightsizing workloads* - managing access to shared resources like CPU and memory correctly is
+  difficult, which often leads to over-provisioning resources (over introducing downtime) resulting
+  in low utilization of host resources.
+- *Size* - containers are often bigger than needed (impacting important metrics like startup time),
+  and often include extraneous system dependencies by default which are not required to run the
+  application.

The desire to fix many of these issues is what led to the "first" generation of serverless.

>
-> Borrowing ideas from CGI, serverless apps are not written as persistent servers. Instead serverless code responds to events (such as HTTP requests) - but does not run as a daemon process listening on a socket. The networking part of serverless architectures is delegated to the underlying infrastructure.
->
+> Borrowing ideas from CGI, serverless apps are not written as persistent servers. Instead
+> serverless code responds to events (such as HTTP requests) - but does not run as a daemon process
+> listening on a socket. The networking part of serverless architectures is delegated to the
+> underlying infrastructure.
+>

-First-generation serverless runtimes (such as AWS Lambda, Google Cloud Functions, Azure Functions, and their Kubernetes counterparts OpenWhisk and KNative) are all based on the principle of running a VM or container per function. While this afforded flexibility for application developers, complexity was introduced to platform engineers as neither compute type is designed to start quickly. Platform engineers operating these serverless platforms became responsible for an elaborate dance of pre-warming compute capacity and loading a workload just in time, making for a difficult tradeoff in cold start performance and cost. The result? A slow and inefficient first generation of Serverless.
+First-generation serverless runtimes (such as AWS Lambda, Google Cloud Functions, Azure Functions,
+and their Kubernetes counterparts OpenWhisk and KNative) are all based on the principle of running a
+VM or container per function. While this afforded flexibility for application developers, complexity
+was introduced to platform engineers as neither compute type is designed to start quickly. Platform
+engineers operating these serverless platforms became responsible for an elaborate dance of
+pre-warming compute capacity and loading a workload just in time, making for a difficult tradeoff in
+cold start performance and cost. The result? A slow and inefficient first generation of Serverless.

To illustrate this point, below are all the steps required to start a Kubernetes Pod:
@@ -51,23 +72,36 @@ To illustrate this point, below are all the steps required to start a Kubernetes
| Pod ready | The Pod responds to the asynchronous readiness probe and signals that it is now ready |
| Service ready | The Pod gets added to the list of active endpoints for a Service and can start receiving traffic |

-Optimizing this process and reducing the deplication that occurs on every
-replica (even when running on the same node) is where [Spin][spin] the Wasm
-runtime and SpinKube start to shine.
+Optimizing this process and reducing the duplication that occurs on every replica (even when running
+on the same node) is where [Spin][spin] the Wasm runtime and SpinKube start to shine.

## Why SpinKube?
-SpinKube solves or mitigates many of the issues commonly associated with deploying, scaling, and operating serverless applications.
+SpinKube solves or mitigates many of the issues commonly associated with deploying, scaling, and
+operating serverless applications.

-The foundational work that underpins the majority of this comes from removing the Container. Spin Applications are primarily distributed as OCI Artifacts, rather than traditional container images, meaning that you only ship your compiled application and its assets - without system dependencies.
+The foundational work that underpins the majority of this comes from removing the Container. Spin
+Applications are primarily distributed as OCI Artifacts, rather than traditional container images,
+meaning that you only ship your compiled application and its assets - without system dependencies.

-To execute those applications we have a [runwasi](https://github.com/containerd/runwasi) based [containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin/) that takes a Spin app, [pre-compiles it for the specific architecture](https://github.com/spinkube/containerd-shim-spin/pull/32) (caching the result in the containerd content store), and is then ready to service requests with [sub-millisecond startup times](https://fermyon.github.io/spin-benchmarks/criterion/reports/spin-executor_sleep-1ms/index.html) - no matter how long your application has been sitting idle.
+To execute those applications we have a [runwasi](https://github.com/containerd/runwasi) based
+[containerd-shim-spin](https://github.com/spinkube/containerd-shim-spin/) that takes a Spin app,
+[pre-compiles it for the specific
+architecture](https://github.com/spinkube/containerd-shim-spin/pull/32) (caching the result in the
+containerd content store), and is then ready to service requests with [sub-millisecond startup
+times](https://fermyon.github.io/spin-benchmarks/criterion/reports/spin-executor_sleep-1ms/index.html) -
+no matter how long your application has been sitting idle.

-This also has the benefit of moving the core security patching of web servers, queue listeners, and other application boundaries to the host, rather than image artifacts themselves.
+This also has the benefit of moving the core security patching of web servers, queue listeners, and
+other application boundaries to the host, rather than image artifacts themselves.

-But what if you mostly rely on images provided by your cloud provider? Or deploy to environments that are not easily reproducible?
+But what if you mostly rely on images provided by your cloud provider? Or deploy to environments
+that are not easily reproducible?

-This is where [runtime-class-manager][runtime-class-manager] (coming soon, formerly [KWasm][kwasm]) comes into play - a Production Ready and Kubernetes-native way to manage WebAssembly runtimes on Kubernetes Hosts. With runtime-class-manager you can ensure that containerd-shim-spin is installed on your hosts, manage version lifecycles, and security patches - all from the Kubernetes API.
+This is where [runtime-class-manager][runtime-class-manager] (coming soon, formerly [KWasm][kwasm])
+comes into play - a Production Ready and Kubernetes-native way to manage WebAssembly runtimes on
+Kubernetes Hosts. With runtime-class-manager you can ensure that containerd-shim-spin is installed
+on your hosts, manage version lifecycles, and security patches - all from the Kubernetes API.
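+
+Once the shim is on your nodes, the main cluster-level wiring a workload needs is a `RuntimeClass`
+that points at it. A quick way to check whether a cluster is ready for Spin apps might look like
+this (a sketch; `wasmtime-spin-v2` is the runtime class name used by the SpinKube quickstart, and
+your installation may use a different one):
+
+```bash
+# List the runtime classes the cluster knows about, then inspect the one used for Spin apps
+kubectl get runtimeclass
+kubectl get runtimeclass wasmtime-spin-v2 -o yaml
+```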
By combining these technologies, scaling a Wasm application looks more like this: @@ -87,22 +121,23 @@ Or in practice: ## Deploying applications -This leaves one major area that can be painful when deploying applications to Kubernetes today: the developer and deployment experience. +This leaves one major area that can be painful when deploying applications to Kubernetes today: the +developer and deployment experience. -This is where [Spin][spin], [the `spin kube` plugin](https://github.com/spinkube/spin-plugin-kube), and [`spin-operator`][spin-operator] start to shine. +This is where [Spin][spin], [the `spin kube` plugin](https://github.com/spinkube/spin-plugin-kube), +and [`spin-operator`][spin-operator] start to shine. ![Architecture diagram of the SpinKube project](spinkube-diagram.png) -The spin-operator makes managing serverless applications easy - you provide the -image and bindings to secrets, and the operator handles realizing that -configuration as regular Kubernetes objects. +The spin-operator makes managing serverless applications easy - you provide the image and bindings +to secrets, and the operator handles realizing that configuration as regular Kubernetes objects. -Combining the operator with the `spin kube` plugin gives you superpowers, as it -streamlines the process of generating Kubernetes YAML for your application and -gives you a powerful starting point for applications. +Combining the operator with the `spin kube` plugin gives you superpowers, as it streamlines the +process of generating Kubernetes YAML for your application and gives you a powerful starting point +for applications. -This is all it takes to get a HTTP application running on Kubernetes (after -installing the containerd shim and the operator): +This is all it takes to get a HTTP application running on Kubernetes (after installing the +containerd shim and the operator): ```bash # Create a new Spin App @@ -123,7 +158,8 @@ spin kube scaffold -f $IMAGE_NAME > app.yaml kubectl apply -f app.yaml ``` -If you want to try things out yourself, the easiest way is to follow the [quickstart guide][quickstart] - we can’t wait to see what you build. +If you want to try things out yourself, the easiest way is to follow the [quickstart +guide][quickstart] - we can’t wait to see what you build. [spin-operator]: https://github.com/spinkube/spin-operator [containerd-shim-spin]: https://github.com/spinkube/containerd-shim-spin From 8bb77ecf2bc8f2f0d965cf8f5baddb8b34f31e2b Mon Sep 17 00:00:00 2001 From: Matthew Fisher Date: Mon, 19 Aug 2024 08:29:28 -0700 Subject: [PATCH 3/3] add rewrap to vscode suggestions Signed-off-by: Matthew Fisher --- .vscode/extensions.json | 5 +++++ .vscode/settings.json | 3 ++- 2 files changed, 7 insertions(+), 1 deletion(-) create mode 100644 .vscode/extensions.json diff --git a/.vscode/extensions.json b/.vscode/extensions.json new file mode 100644 index 00000000..912bb9bc --- /dev/null +++ b/.vscode/extensions.json @@ -0,0 +1,5 @@ +{ + "recommendations": [ + "stkb.rewrap" + ] +} diff --git a/.vscode/settings.json b/.vscode/settings.json index 65b3d459..db3ccae0 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -1,5 +1,6 @@ { + "rewrap.wrappingColumn": 100, "cSpell.words": [ "SpinKube" ] -} \ No newline at end of file +}