diff --git a/README.md b/README.md
index f38e66d6f..967629585 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
[](https://kubernetes.slack.com/messages/akri)
[](https://blog.rust-lang.org/2020/12/31/Rust-1.49.0.html)
-[](https://v1-16.docs.kubernetes.io/)
+[](https://kubernetes.io/)
[](https://codecov.io/gh/deislabs/akri)
[](https://github.com/deislabs/akri/actions?query=workflow%3A%22Check+Rust%22)
@@ -22,28 +22,29 @@ Simply put: you name it, Akri finds it, you use it.
## Why Akri
At the edge, there are a variety of sensors, controllers, and MCU class devices that are producing data and performing actions. For Kubernetes to be a viable edge computing solution, these heterogeneous “leaf devices” need to be easily utilized by Kubernetes clusters. However, many of these leaf devices are too small to run Kubernetes themselves. Akri is an open source project that exposes these leaf devices as resources in a Kubernetes cluster. It leverages and extends the Kubernetes [device plugin framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), which was created with the cloud in mind and focuses on advertising static resources such as GPUs and other system hardware. Akri took this framework and applied it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability.
-Akri is made for the edge, **handling the dynamic appearance and disappearance of leaf devices**. Akri provides an abstraction layer similar to [CNI](https://github.com/containernetworking/cni), but instead of abstracting the underlying network details, it is removing the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply a Akri Configuration to a cluster, specifying the discovery protocol (say ONVIF) and the pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby **providing high availability** in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes.
+Akri is made for the edge, **handling the dynamic appearance and disappearance of leaf devices**. Akri provides an abstraction layer similar to [CNI](https://github.com/containernetworking/cni), but instead of abstracting the underlying network details, it removes the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply an Akri Configuration to a cluster, specifying the Discovery Handler (say ONVIF) that should be used to discover the devices and the Pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby **providing high availability** in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes.
-Most importantly, Akri **was built to be extensible**. We currently have ONVIF, udev, and OPC UA discovery handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge.
+Most importantly, Akri **was built to be extensible**. Akri currently supports ONVIF, udev, and OPC UA Discovery Handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge.
## How Akri Works
-Akri’s architecture is made up of four key components: two custom resources, a device plugin implementation, and a custom controller. The first custom resource, the Akri Configuration, is where **you name it**. This tells Akri what kind of device it should look for. At this point, **Akri finds it**! Akri's device plugin implementation looks for the device and tracks its availability using Akri's second custom resource, the Akri Instance. Having found your device, the Akri Controller helps **you use it**. It sees each Akri Instance (which represents a leaf device) and deploys a ("broker") pod that knows how to connect to the resource and utilize it.
+Akri’s architecture is made up of five key components: two custom resources, Discovery Handlers, an Agent (device plugin implementation), and a custom Controller. The first custom resource, the Akri Configuration, is where **you name it**. This tells Akri what kind of device it should look for. At this point, **Akri finds it**! Akri's Discovery Handlers look for the device and inform the Agent of discovered devices. The Agent then creates Akri's second custom resource, the Akri Instance, to track the availability and usage of the device. Having found your device, the Akri Controller helps **you use it**. It sees each Akri Instance (which represents a leaf device) and deploys a "broker" Pod that knows how to connect to the resource and utilize it.
-
## Quick Start with a Demo
Try the [end to end demo](./docs/end-to-end-demo.md) of Akri to see Akri discover mock video cameras and a streaming app display the footage from those cameras. It includes instructions on K8s cluster setup. If you would like to perform the demo on a cluster of Raspberry Pi 4's, see the [Raspberry Pi 4 demo](./docs/end-to-end-demo-rpi4.md).
## Documentation
-- [Running Akri using our currently supported protocols](./docs/user-guide.md)
+- [User guide for deploying Akri using Helm](./docs/user-guide.md)
- [Akri architecture in depth](./docs/architecture.md)
- [How to build Akri](./docs/development.md)
-- [How to extend Akri for protocols that haven't been supported yet](./docs/extensibility.md).
-- Proposals for enhancements such as new protocol implementations can be found in the [proposals folder](./docs/proposals)
+- [How to extend Akri for protocols that haven't been supported yet](./docs/discovery-handler-development.md).
+- [How to create a broker to leverage discovered devices](./docs/broker-development.md).
+- Proposals for enhancements such as new Discovery Handler implementations can be found in the [proposals folder](./docs/proposals)
## Roadmap
-Akri was built to be extensible. We currently have ONVIF, udev, OPC UA discovery handlers, but as a community, we hope to continuously support more protocols. We have created a [discovery handler implementation roadmap](./docs/roadmap.md#implement-additional-discovery-handlers) in order to prioritize development of discovery handlers. If there is a protocol you feel we should prioritize, please [create an issue](https://github.com/deislabs/akri/issues/new/choose), or better yet, contribute the implementation! We are excited to work with you to build a more connected edge.
+Akri was built to be extensible. We currently have ONVIF, udev, and OPC UA Discovery Handlers, but as a community, we hope to continuously support more protocols. We have created a [Discovery Handler implementation roadmap](./docs/roadmap.md#implement-additional-discovery-handlers) in order to prioritize development of Discovery Handlers. If there is a protocol you feel we should prioritize, please [create an issue](https://github.com/deislabs/akri/issues/new/choose), or better yet, contribute the implementation!
## Contributing
This project welcomes contributions, whether by [creating new issues](https://github.com/deislabs/akri/issues/new/choose) or pull requests. See our [contributing document](./docs/contributing.md) on how to get started.
diff --git a/docs/agent-in-depth.md b/docs/agent-in-depth.md
index cf83ef3ab..4da7db325 100644
--- a/docs/agent-in-depth.md
+++ b/docs/agent-in-depth.md
@@ -26,6 +26,27 @@ To enable resource sharing, the Akri Agent creates and updates the `Instance.dev
For more detailed information, see the [in-depth resource sharing doc](./resource-sharing-in-depth.md).
## Resource discovery
-The Agent discovers resources via Discovery Handlers (DHs). A Discovery Handler is anything that implements the `DiscoveryHandler` service defined in [`discovery.proto`](../discovery-utils/proto/discovery.proto). In order to be utilized, a DH must register with the Agent, which hosts the `Registration` service defined in [`discovery.proto`](../discovery-utils/proto/discovery.proto). The Agent maintains a list of registered DHs and their connectivity status, which is either `Waiting`, `Active`, or `Offline(Instant)`. When registered, a DH's status is `Waiting`. Once the Agent has successfully created a connecting with a DH, due a Configuration requesting resources discovered by that DH, it's status is set to `Active`. If the Agent is unable to connect or loses a connection with a DH, its status is set to `Offline(Instant)`. The `Instant` marks the time at which the DH became unresponsive. If the DH has been offline for more than 5 minutes, it is removed from the Agent's list of registered discovery handlers. If a Configuration is deleted, the Agent drops the connection it made with all DHs for that Configuration and marks the DHs' statuses as `Waiting`. Note, while probably not commonplace, the Agent allows for multiple DHs to be registered for the same protocol. IE: you could have two udev DHs running on a node on different sockets.
-
-Supported DHs each have a [library](../discovery-handlers) and a [binary implementation](../discovery-handler-modules). This allows them to either be run within the Agent binary or in their own Pod.
+The Agent discovers resources via Discovery Handlers (DHs). A Discovery Handler is anything that implements the
+`DiscoveryHandler` service defined in [`discovery.proto`](../discovery-utils/proto/discovery.proto). In order to be
+utilized, a DH must register with the Agent, which hosts the `Registration` service defined in
+[`discovery.proto`](../discovery-utils/proto/discovery.proto). The Agent maintains a list of registered DHs and their
+connectivity statuses, which are either `Waiting`, `Active`, or `Offline(Instant)`. When registered, a DH's status is
+`Waiting`. Once a Configuration requesting resources discovered by a DH is applied to the Akri-enabled cluster, the
+Agent will create a connection with the DH requested in the Configuration and set the status of the DH to `Active`. If
+the Agent is unable to connect or loses a connection with a DH, its status is set to `Offline(Instant)`. The `Instant`
+marks the time at which the DH became unresponsive. If the DH has been offline for more than 5 minutes, it is removed
+from the Agent's list of registered Discovery Handlers. If a Configuration is deleted, the Agent drops the connection it
+made with all DHs for that Configuration and marks the DHs' statuses as `Waiting`. Note that, while probably not
+commonplace, the Agent allows multiple DHs to be registered for the same protocol; for example, you could have two udev
+DHs running on a node on different sockets.
+
+The Agent's registration service defaults to running on the socket `/var/lib/akri/agent-registration.sock` but can be
+configured with Helm. While Discovery Handlers must register with this service over UDS, the Discovery Handler's service
+can run over UDS or an IP-based endpoint.
+
+Supported Rust DHs each have a [library](../discovery-handlers) and a [binary
+implementation](../discovery-handler-modules). This allows them to either be run within the Agent binary or in their own
+Pod.
+
+Reference the [Discovery Handler development document](./discovery-handler-development.md) to learn how to implement a
+Discovery Handler.
diff --git a/docs/architecture.md b/docs/architecture.md
index f370ff4e6..64ae5e1c1 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -64,7 +64,7 @@ For a more in-depth understanding, see [Controller In-depth](./controller-in-dep
# ...
capacity: 3
```
-1. The Akri Agent sees the Configuration and discovers a leaf device using the protocol specified in the Configuration. It creates a device plugin for that leaf device and registers it with the kubelet. The Agent then creates an Instance for the discovered leaf device, listing itself as a node that can access it under `nodes`. The Akri Agent puts all the information that the broker pods will need in order to connect to the specific device under the `brokerProperties` section of the Instance. Later, the controller will mount these as environment variables in the broker pods. Note how Instance has 3 available `deviceUsage` slots, since capacity was set to 3 and no brokers have been scheduled to the leaf device yet.
+1. The Akri Agent sees the Configuration and discovers a leaf device using the protocol specified in the Configuration. It creates a device plugin for that leaf device and registers it with the kubelet. When creating the device plugin, it tells the kubelet to set connection information for that specific device and additional metadata from a Configuration's `brokerProperties` as environment variables in all Pods that request this device's resource. This information is also set in the `brokerProperties` section of the Instance the Agent creates to represent the discovered leaf device. In the Instance, the Agent also lists itself as a node that can access the device under `nodes`. Note how the Instance has 3 available `deviceUsage` slots, since capacity was set to 3 and no brokers have been scheduled to the leaf device yet.
```yaml
kind: Instance
metadata:
@@ -115,7 +115,7 @@ For a more in-depth understanding, see [Controller In-depth](./controller-in-dep
# ...
phase: Pending
```
-1. The kubelet on the selected node sees the scheduled pod and resource limit. It checks to see if the resource is available by calling `allocate` on the device plugin running in the Agent for the requested leaf device. When calling `allocate`, the kubelet requests a specific `deviceUsage` slot. Let's say the kubelet requested `akri---1`. The leaf device's device plugin checks to see that the requested `deviceUsage` slot has not been taken by another node. If it is available, it reserves that `deviceUsage` slot for this node (as shown below) and returns true.
+1. The kubelet on the selected node sees the scheduled pod and resource limit. It checks to see if the resource is available by calling `allocate` on the device plugin running in the Agent for the requested leaf device. When calling `allocate`, the kubelet requests a specific `deviceUsage` slot. Let's say the kubelet requested `akri---1`. The leaf device's device plugin checks to see that the requested `deviceUsage` slot has not been taken by another node. If it is available, it reserves that `deviceUsage` slot for this node (as shown below) and returns true. In the `allocate` response, the Agent also tells the kubelet to set the `Instance.brokerProperties` as environment variables in the broker Pod.
```yaml
kind: Instance
metadata:
diff --git a/docs/broker-development.md b/docs/broker-development.md
new file mode 100644
index 000000000..8b58d97bd
--- /dev/null
+++ b/docs/broker-development.md
@@ -0,0 +1,77 @@
+# Creating a Broker to Utilize Discovered Devices
+Akri's Agent discovers devices described by an Akri Configuration, and for each discovered device, it creates Kubernetes
+resources using the Device Plugin Framework, which can later be requested by Pods. Akri's Controller can automate the
+usage of discovered devices by deploying Pods that request the newly created resources. **Akri calls these Pods brokers.**
+
+> Background: Akri chose the term "broker" because one use case Akri initially envisioned was deploying Pods that acted
+> as protocol translation gateways. For example, Akri could discover USB cameras and automatically deploy a broker to
+> each camera that advertises the camera as an IP camera that could be accessed outside the Node.
+
+Akri takes a micro-service approach to deploying brokers. A broker is deployed to each Node that can see a discovered
+device (limited by a `capacity` that can be set in a Configuration to limit the number of Nodes that can utilize a
+device at once). Each broker is provisioned with device connection information and other metadata as environment
+variables. These environment variables come from two sources: a Configuration's `brokerProperties` and the `properties`
+of a `Device` discovered by a Discovery Handler. The former is where an operator can specify environment variables that
+will be set in brokers that utilize any device discovered via the Configuration. The latter is specific to one device
+and usually contains connection information such as an RTSP URL for an ONVIF camera or a devnode for a USB device. Also,
+while `brokerProperties` can be unique to a scenario, the `properties` environment variable keys are consistent for a
+Discovery Handler, with values changing based on the device. All the environment variables from these two sources are
+displayed in an Instance that represents a discovered device, making it a good reference for what environment variables
+the broker should expect. The image below shows how a broker Pod's environment variables come from the two
+aforementioned sources.
+
+![Diagram of broker environment variable sources](./media/setting-broker-environment-variables.svg)
+
+## Discovery Handler specified environment variables
+The first step to developing a broker is understanding what information will be made available to the Pod via the
+Discovery Handler (aka the `Device.properties`). The following table contains the environment variables specified by
+each of Akri's currently supported Discovery Handlers, and the expected content of the environment variables.
+
+| Discovery Handler | Env Var Name | Value Type | Examples | Always Present? (Y/N) |
+|---|---|---|---|---|
+| debugEcho (for testing) | `DEBUG_ECHO_DESCRIPTION` | some random string | `foo`, `bar` | Y |
+| ONVIF | `ONVIF_DEVICE_SERVICE_URL` | ONVIF camera source URL | `http://10.123.456.789:1000/onvif/device_service` | Y |
+| ONVIF | `ONVIF_DEVICE_IP_ADDRESS` | IP address of the camera | `10.123.456.789` | Y |
+| ONVIF | `ONVIF_DEVICE_MAC_ADDRESS` | MAC address of the camera | `48:0f:cf:4e:1b:3d`, `480fcf4e1b3d`| Y |
+| OPC UA | `OPCUA_DISCOVERY_URL` | [DiscoveryURL](https://reference.opcfoundation.org/GDS/docs/4.3.3/) of specific OPC UA Server/Application | `10.123.456.789:1000/Some/Path/` | Y |
+| udev | `UDEV_DEVNODE` | device node for specific device | `/dev/video1`, `/dev/snd/pcmC1D0p`, `/dev/dri/card0` | Y |
+
+A broker should look up the variables set by the appropriate Discovery Handler and use the contents to connect to a
+specific device.
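+
+As an illustration, a minimal sketch of a hypothetical udev-based broker (not one of Akri's sample brokers) that simply
+reads `UDEV_DEVNODE` and opens the device node it points to could look like this in Rust:
+
+```rust
+use std::{env, fs::File};
+
+fn main() -> std::io::Result<()> {
+    // Set by the Agent for the specific device this broker was scheduled to use.
+    let devnode = env::var("UDEV_DEVNODE").expect("UDEV_DEVNODE should be set by Akri");
+    println!("Connecting to device at {}", devnode);
+    let _device = File::open(&devnode)?;
+    // ... read from the device and expose its data over a service ...
+    Ok(())
+}
+```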
+
+## Exposing device information over a service
+Oftentimes, it is useful for a broker to expose some information from its device over a service. Akri, by default,
+assumes this behavior, creating a Kubernetes service for each broker (called an Instance level service) and for all
+brokers of a Configuration (called a Configuration level service). This allows an application to target a specific
+device/broker or all devices/brokers, the latter of which allows the application to be oblivious to the coming and going
+of devices (and thereby brokers).
+
+> Note: This default creation of Instance and Configuration services can be disabled by setting `<discovery handler name>.configuration.createInstanceServices=false` and `<discovery handler name>.configuration.createConfigurationService=false` when installing Akri's Helm chart.
+
+A broker can expose information via REST, gRPC, etc. Akri's [sample brokers](../samples/brokers) all use gRPC. For
+example, the udev video and ONVIF brokers both use the same [camera proto
+file](../samples/brokers/udev-video-broker/proto/camera.proto) for their gRPC interfaces, which contains a service that
+serves camera frames. This means that one end application can be deployed that implements the client side of the
+interface and grabs frames from all cameras, whether IP or USB based. This is exactly what our [sample streaming
+application](../samples/apps/video-streaming-app) does.
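+
+As a sketch, a gRPC interface for a hypothetical sensor broker (not one of Akri's sample proto files) could look like
+the following:
+
+```proto
+syntax = "proto3";
+package sensor;
+
+// Service a sensor broker might expose so that applications can pull readings
+// without knowing how the underlying device is connected.
+service Sensor {
+    // Returns the latest reading from the discovered device.
+    rpc GetReading (ReadingRequest) returns (ReadingResponse);
+}
+
+message ReadingRequest {}
+
+message ReadingResponse {
+    double value = 1;
+    string units = 2;
+}
+```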
+
+## Deploying your custom broker
+Once you have created a broker, you can ask Akri to automatically deploy it to all devices discovered by a
+Configuration by specifying the image in `<discovery handler name>.configuration.brokerPod.image.repository` and
+`<discovery handler name>.configuration.brokerPod.image.tag`. For example, say you created a broker that connects to a
+USB camera and advertises it as an IP camera. You want to deploy it to all USB cameras on your cluster's nodes using
+Akri, so you deploy Akri with a Configuration that uses the udev Discovery Handler and set the image of your broker (say
+`ghcr.io/brokers/camera-broker:v0.0.1`), like so:
+```sh
+helm repo add akri-helm-charts https://deislabs.github.io/akri/
+helm install akri akri-helm-charts/akri-dev \
+ --set udev.discovery.enabled=true \
+ --set udev.configuration.enabled=true \
+ --set udev.configuration.name=akri-udev-video \
+ --set udev.configuration.discoveryDetails.udevRules[0]='KERNEL=="video[0-9]*"' \
+ --set udev.configuration.brokerPod.image.repository="ghcr.io/brokers/camera-broker" \
+ --set udev.configuration.brokerPod.image.tag="v0.0.1"
+```
\ No newline at end of file
diff --git a/docs/discovery-handler-development.md b/docs/discovery-handler-development.md
new file mode 100644
index 000000000..cc2e81c07
--- /dev/null
+++ b/docs/discovery-handler-development.md
@@ -0,0 +1,313 @@
+# Implementing a new Discovery Handler
+Akri has [implemented discovery via several protocols](./roadmap.md#currently-supported-discovery-handlers) with sample
+brokers and applications to demonstrate usage. However, there may be protocols you would like to use to discover
+resources that have not been implemented as Discovery Handlers yet. To enable the discovery of resources via a new
+protocol, you will implement a Discovery Handler (DH), which does discovery on behalf of the Agent. A Discovery Handler
+is anything that implements the `DiscoveryHandler` service and `Registration` client defined in [Akri's discovery gRPC
+proto file](../discovery-utils/proto/discovery.proto). These DHs run as their own Pods and are expected to register with
+the Agent, which hosts the `Registration` service defined in the gRPC interface.
+
+This document will walk you through the development steps to implement a Discovery Handler. If you would rather walk
+through an example, see Akri's [extensibility demo](./extensibility.md), which walks through creating a Discovery
+Handler that discovers HTTP based devices. This document will also cover the steps to get your Discovery Handler added
+to Akri, should you wish to [contribute it back](./contributing.md).
+
+A Discovery Handler can be written in any language using protobuf; however, Akri has provided a template for
+accelerating the development of Rust Discovery Handlers. This document will walk through both of those options. If using the
+Rust template, still read through the non-Rust section to gain context on the Discovery Handler interface.
+
+## Creating a Discovery Handler using Akri's Discovery Handler proto file
+This section covers how to use [Akri's discovery gRPC proto file](../discovery-utils/proto/discovery.proto) to create a Discovery Handler in the
+language of your choosing. It consists of three steps:
+1. Registering your Discovery Handler with the Akri Agent
+1. Specifying device filtering in a Configuration
+1. Implementing the `DiscoveryHandler` service
+
+### Registering with the Akri Agent
+Discovery Handlers and Agents run on each worker Node in a cluster. A Discovery Handler should register with the Agent
+running on its Node at the Agent's registration socket, which defaults to `/var/lib/akri/agent-registration.sock`. The
+directory can be changed when installing Akri by setting `agent.host.discoveryHandlers`. For example, to request that
+the Agent's `Registration` service live at `~/akri/sockets/agent-registration.sock` set
+`agent.host.discoveryHandlers=~/akri/sockets` when installing Akri. The Agent hosts the `Registration` service defined
+in [Akri's discovery interface](../discovery-utils/proto/discovery.proto) on this socket.
+
+When registering with the Agent, a Discovery Handler specifies its name (the one that will later be specified in
+Configurations), the endpoint of its Discovery Handler service, and whether the devices it discovers are shared
+(visible to multiple nodes).
+
+```proto
+message RegisterDiscoveryHandlerRequest {
+ // Name of the `DiscoveryHandler`. This name is specified in an
+ // Akri Configuration, to request devices discovered by this `DiscoveryHandler`.
+ string name = 1;
+ // Endpoint for the registering `DiscoveryHandler`
+ string endpoint = 2;
+ // Specifies the type of endpoint.
+ enum EndpointType {
+ UDS = 0;
+ NETWORK = 1;
+ }
+ EndpointType endpoint_type = 3;
+ // Specifies whether this device could be used by multiple nodes (e.g. an IP camera)
+ // or can only ever be discovered by a single node (e.g. a local USB device)
+ bool shared = 4;
+}
+```
+
+Note that a Discovery Handler must also specify an `EndpointType` of either `UDS` or `Network` in the
+`RegisterDiscoveryHandlerRequest`. While Discovery Handlers must register with the Agent's `Registration` service over
+UDS, a `DiscoveryHandler` service can run over UDS or an IP-based endpoint. However, the current convention is to use
+UDS for both registration and discovery.
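+
+As a rough sketch (not the exact generated API), registration from a Rust Discovery Handler could look like the
+following, assuming a client generated from [`discovery.proto`](../discovery-utils/proto/discovery.proto) with
+[`tonic`](https://github.com/hyperium/tonic); the generated module, client, and rpc names are assumptions based on the
+proto file, and the Rust template described later performs this registration for you:
+
+```rust
+use tonic::transport::{Endpoint, Uri};
+use tower::service_fn;
+
+// `registration_client`, `RegisterDiscoveryHandlerRequest`, and
+// `register_discovery_handler_request` are assumed to come from the
+// tonic/prost-generated code for discovery.proto.
+async fn register_with_agent() -> Result<(), Box<dyn std::error::Error>> {
+    // Connect to the Agent's registration service over UDS. The URI is a placeholder;
+    // the connector always dials the Agent's registration socket.
+    let channel = Endpoint::try_from("http://[::1]:50051")?
+        .connect_with_connector(service_fn(|_: Uri| {
+            tokio::net::UnixStream::connect("/var/lib/akri/agent-registration.sock")
+        }))
+        .await?;
+    let mut client = registration_client::RegistrationClient::new(channel);
+    client
+        .register_discovery_handler(RegisterDiscoveryHandlerRequest {
+            name: "my-protocol".to_string(),
+            // Socket on which this Discovery Handler's `DiscoveryHandler` service is served.
+            endpoint: "/var/lib/akri/my-protocol.sock".to_string(),
+            endpoint_type: register_discovery_handler_request::EndpointType::Uds as i32,
+            shared: true,
+        })
+        .await?;
+    Ok(())
+}
+```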
+
+
+### Specifying device filtering in a Configuration
+Discovery Handlers are passed information about what subset of devices to discover from a Configuration's
+`discoveryDetails`. Akri's Configuration CRD takes in [`DiscoveryHandlerInfo`](../shared/src/akri/configuration.rs),
+which is defined structurally in Rust as follows:
+```rust
+#[derive(Serialize, Deserialize, Clone, Debug)]
+#[serde(rename_all = "camelCase")]
+pub struct DiscoveryHandlerInfo {
+ pub name: String,
+ #[serde(default)]
+ pub discovery_details: String,
+}
+```
+When creating a Discovery Handler, you must decide what name to give it and add any details you would like your
+Discovery Handler to receive in the `discovery_details` string. The Agent passes this string to Discovery Handlers as
+part of a `DiscoverRequest`. A Discovery Handler must then parse this string -- Akri's built-in Discovery Handlers store
+an expected structure in it as serialized YAML -- to determine what to discover, filter out of discovery, and so on.
+
+For example, a Configuration that uses the ONVIF Discovery Handler, which allows filtering IP cameras by IP address, MAC
+address, and scopes, looks like the following.
+```yaml
+apiVersion: akri.sh/v0
+kind: Configuration
+metadata:
+  name: akri-onvif
+spec:
+  discoveryHandler:
+    name: onvif
+    discoveryDetails: |+
+      ipAddresses:
+        action: Exclude
+        items:
+        - 10.0.0.1
+        - 10.0.0.2
+      macAddresses:
+        action: Exclude
+        items: []
+      scopes:
+        action: Include
+        items:
+        - onvif://www.onvif.org/name/GreatONVIFCamera
+        - onvif://www.onvif.org/name/AwesomeONVIFCamera
+      discoveryTimeoutSeconds: 2
+```
+The `discoveryHandler.name` must match the `RegisterDiscoveryHandlerRequest.name` the Discovery Handler uses when
+registering with the Agent. Once you know what will be passed to your Discovery Handler, it's time to implement the
+discovery functionality.
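+
+As an illustration (using made-up structs rather than the built-in ONVIF Discovery Handler's actual types), a Rust
+Discovery Handler could deserialize the `discoveryDetails` string above with
+[`serde_yaml`](https://crates.io/crates/serde_yaml):
+
+```rust
+use serde::Deserialize;
+
+#[derive(Deserialize, Debug)]
+#[serde(rename_all = "camelCase")]
+struct FilterList {
+    /// "Include" or "Exclude"
+    action: String,
+    items: Vec<String>,
+}
+
+#[derive(Deserialize, Debug)]
+#[serde(rename_all = "camelCase")]
+struct OnvifDiscoveryDetails {
+    ip_addresses: Option<FilterList>,
+    mac_addresses: Option<FilterList>,
+    scopes: Option<FilterList>,
+    discovery_timeout_seconds: Option<u32>,
+}
+
+fn parse_discovery_details(discovery_details: &str) -> Result<OnvifDiscoveryDetails, serde_yaml::Error> {
+    // The Agent passes the Configuration's `discoveryDetails` string through untouched,
+    // so the Discovery Handler decides the format; here it is parsed as YAML.
+    serde_yaml::from_str(discovery_details)
+}
+```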
+
+### Implementing the `DiscoveryHandler` service
+The service should have all the functionality desired for discovering devices via your protocol and filtering for only
+the desired set. Each device a Discovery Handler discovers is represented by the `Device` type, as shown in a subset of
+the [discovery proto file](../discovery-utils/proto/discovery.proto) below. A Discovery Handler sets a unique `id` for
+each device, puts any connection information that should be set as environment variables in requesting Pods in
+`properties`, and specifies any mounts or devices that should be made available to requesting Pods.
+
+```proto
+message DiscoverResponse {
+ // List of discovered devices
+ repeated Device devices = 1;
+}
+
+message Device {
+ // Identifier for this device
+ string id = 1;
+ // Properties that identify the device. These are stored in the device's instance
+ // and set as environment variables in the device's broker Pods. May be information
+ // about where to find the device such as an RTSP URL or a device node (e.g. `/dev/video1`)
+ map<string, string> properties = 2;
+ // Optionally specify mounts for Pods that request this device as a resource
+ repeated Mount mounts = 3;
+ // Optionally specify device information to be mounted for Pods that request this device as a resource
+ repeated DeviceSpec device_specs = 4;
+}
+```
+
+Note, `discover` creates a streamed connection with the Agent, where the Agent gets the receiving end of the channel and
+the Discovery Handler sends device updates via the sending end of the channel. If the Agent drops its end, the Discovery
+Handler should stop discovery and attempt to re-register with the Agent. The Agent may drop its end due to an error or a
+deleted Configuration.
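+
+To make the shape of the service concrete, below is a rough Rust sketch of a `discover` implementation using
+[`tonic`](https://github.com/hyperium/tonic). The trait and message types are assumed to come from code generated from
+[`discovery.proto`](../discovery-utils/proto/discovery.proto) (exact names may differ); the Rust template described in
+the next section generates this scaffolding for you.
+
+```rust
+use std::collections::HashMap;
+use tokio::sync::mpsc;
+use tokio_stream::wrappers::ReceiverStream;
+use tonic::{Request, Response, Status};
+
+// `DiscoveryHandler`, `DiscoverRequest`, `DiscoverResponse`, and `Device` are
+// assumed to be in scope from the tonic-generated code for discovery.proto.
+pub struct DiscoveryHandlerImpl {}
+
+#[tonic::async_trait]
+impl DiscoveryHandler for DiscoveryHandlerImpl {
+    type DiscoverStream = ReceiverStream<Result<DiscoverResponse, Status>>;
+
+    async fn discover(
+        &self,
+        request: Request<DiscoverRequest>,
+    ) -> Result<Response<Self::DiscoverStream>, Status> {
+        // The Configuration's `discoveryDetails` string arrives in the request.
+        let _discovery_details = request.into_inner().discovery_details;
+        let (tx, rx) = mpsc::channel(4);
+        tokio::spawn(async move {
+            loop {
+                // Replace with real discovery and filtering logic for your protocol.
+                let devices = vec![Device {
+                    id: "my-device-1".to_string(),
+                    properties: HashMap::from([(
+                        "MY_PROTOCOL_DEVICE_URL".to_string(),
+                        "http://10.0.0.1:8080".to_string(),
+                    )]),
+                    mounts: Vec::new(),
+                    device_specs: Vec::new(),
+                }];
+                // If the Agent dropped its end (e.g. the Configuration was deleted),
+                // stop discovery; the Discovery Handler should then re-register.
+                if tx.send(Ok(DiscoverResponse { devices })).await.is_err() {
+                    break;
+                }
+                tokio::time::sleep(std::time::Duration::from_secs(60)).await;
+            }
+        });
+        Ok(Response::new(ReceiverStream::new(rx)))
+    }
+}
+```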
+
+## Creating a Discovery Handler in Rust using a template
+Rust Discovery Handler development can be kick-started using Akri's [Discovery Handler template](https://github.com/kate-goldenring/akri-discovery-handler-template) and
+[`cargo-generate`](https://github.com/cargo-generate/cargo-generate). Use the template to create a new project, specifying the name of your project:
+```sh
+cargo install cargo-generate
+cargo generate --git https://github.com/kate-goldenring/akri-discovery-handler-template.git --name akri-discovery-handler
+```
+This template abstracts away the work of registering with the Agent and creating the Discovery Handler service. All you
+need to do is specify the Discovery Handler name and whether discovered devices are sharable, implement discovery, and
+build the Discovery Handler.
+
+1. Specifying the Discovery Handler name and whether devices are sharable
+
+ Inside the newly created `akri-discovery-handler` project, navigate to `main.rs`. It contains all the logic to register our
+ `DiscoveryHandler` with the Akri Agent. We only need to specify the `DiscoveryHandler` name and whether the devices
+ discovered by our `DiscoveryHandler` can be shared. This is the name the Discovery Handler uses when registering
+ with the Agent. It is later specified in a Configuration to tell the Agent which Discovery Handler to use. For
+ example, in Akri's [udev Discovery Handler](../discovery-handler-modules/udev-discovery-handler/src/main.rs), `name`
+ is set to `udev` and `shared` to `false` as all devices are locally attached to nodes. The Discovery Handler name
+ also resolves to the name of the socket the template serves the Discovery Handler on.
+1. Implementing discovery
+
+ A `DiscoveryHandlerImpl` struct has been created (in `discovery_handler.rs`) that minimally
+ implements the `DiscoveryHandler` service. Fill in the `discover` function, which returns the list of discovered `devices`.
+1. Build the Discovery Handler container
+
+ Build your Discovery Handler and push it to your container registry. To do so,
+ we simply need to run this step from the base folder of the newly generated Discovery Handler project:
+ ```bash
+ HOST="ghcr.io"
+ USER=[[GITHUB-USER]]
+ DH="discovery-handler"
+ TAGS="v1"
+
+ DH_IMAGE="${HOST}/${USER}/${DH}"
+ DH_IMAGE_TAGGED="${DH_IMAGE}:${TAGS}"
+
+ docker build \
+ --tag=${DH_IMAGE_TAGGED} \
+ --file=./Dockerfile.discovery-handler \
+ . && \
+ docker push ${DH_IMAGE_TAGGED}
+ ```
+
+ Save the name of your image. We will pass it into our Akri installation command when we are ready to deploy our
+ Discovery Handler.
+
+## Deploy Akri with your custom Discovery Handler
+Now that you have created a Discovery Handler, deploy Akri and see how it discovers the devices and creates Akri
+Instances for each Device.
+
+> Optional: If you've previously installed Akri and wish to reset, you may:
+>
+> ```bash
+> # Delete Akri Helm
+> sudo helm delete akri
+> ```
+
+Akri provides Helm templates for custom Discovery Handlers and their Configurations as a starting point; they may need
+to be modified to meet the needs of your Discovery Handler. When installing Akri, specify that you want to deploy a
+custom Discovery Handler as a DaemonSet by setting `custom.discovery.enabled=true`. Specify the container for that
+DaemonSet as the Discovery Handler that you built [above](#build-the-discovery-handler-container) by setting
+`custom.discovery.image.repository=$DH_IMAGE` and `custom.discovery.image.tag=$TAGS`. To automatically deploy a custom
+Configuration, set `custom.configuration.enabled=true`. Customize the Configuration's `discovery_details` string to
+contain any filtering information: `custom.configuration.discoveryDetails=<filtering details>`.
+
+Also set the name the Discovery Handler will register under (`custom.configuration.discoveryHandlerName`) and a name for
+the Discovery Handler and Configuration (`custom.discovery.name` and `custom.configuration.name`). All these settings
+come together as the following Akri installation command:
+> Note: Be sure to consult the [user guide](./user-guide.md) to see whether your Kubernetes distribution needs any
+> additional configuration.
+```bash
+ helm repo add akri-helm-charts https://deislabs.github.io/akri/
+ helm install akri akri-helm-charts/akri-dev \
+ --set imagePullSecrets[0].name="crPullSecret" \
+ --set custom.discovery.enabled=true \
+ --set custom.discovery.image.repository=$DH_IMAGE \
+ --set custom.discovery.image.tag=$TAGS \
+ --set custom.discovery.name=akri-<name>-discovery \
+ --set custom.configuration.enabled=true \
+ --set custom.configuration.name=akri-<name> \
+ --set custom.configuration.discoveryHandlerName=<name> \
+ --set custom.configuration.discoveryDetails=<filtering details>
+ ```
+
+> Note: if your Discovery Handler's `discoveryDetails` cannot be easily set using Helm, generate a Configuration file
+> and modify it as needed before applying it to the cluster yourself. (The command below disables the other Akri
+> components so that only the Configuration enabled by `custom.configuration.enabled` is generated.)
+> ```bash
+> helm install akri akri-helm-charts/akri-dev \
+> --set imagePullSecrets[0].name="crPullSecret" \
+> --set custom.discovery.enabled=true \
+> --set custom.discovery.image.repository=$DH_IMAGE \
+> --set custom.discovery.image.tag=$TAGS \
+> --set custom.discovery.name=akri-<name>-discovery \
+> --set custom.configuration.enabled=true \
+> --set custom.configuration.name=akri-<name> \
+> --set custom.configuration.discoveryHandlerName=<name> \
+> --set custom.configuration.discoveryDetails=to-modify \
+> --set rbac.enabled=false \
+> --set controller.enabled=false \
+> --set agent.enabled=false > configuration.yaml
+> ```
+> After modifying the file, apply it to the cluster using standard kubectl:
+> ```bash
+> kubectl apply -f configuration.yaml
+> ```
+
+Watch as the Agent, Controller, and Discovery Handler Pods are spun up and as Instances are created for each of the
+discovered devices.
+```bash
+watch kubectl get pods,akrii
+```
+
+Inspect the Instances' `brokerProperties`. They will be set as environment
+variables in Pods that request the Instance's/device's resource.
+```bash
+kubectl get akrii -o wide
+```
+
+If you simply wanted Akri to expose discovered devices to the cluster as Kubernetes resources, you could stop here. If
+you have a workload that could utilize one of these resources, you could [manually deploy pods that request them as
+resources](./requesting-akri-resources.md). Alternatively, you could have Akri automatically deploy workloads to
+discovered devices. We call these workloads brokers. To quickly see this, deploy empty nginx pods to discovered
+resources, by updating our Configuration to include a broker PodSpec.
+```bash
+ helm upgrade akri akri-helm-charts/akri-dev \
+ --set imagePullSecrets[0].name="crPullSecret" \
+ --set custom.discovery.enabled=true \
+ --set custom.discovery.image.repository=$DH_IMAGE \
+ --set custom.discovery.image.tag=$TAGS \
+ --set custom.discovery.name=akri-<name>-discovery \
+ --set custom.configuration.enabled=true \
+ --set custom.configuration.name=akri-<name> \
+ --set custom.configuration.discoveryHandlerName=<name> \
+ --set custom.configuration.discoveryDetails=<filtering details> \
+ --set custom.configuration.brokerPod.image.repository=nginx
+ watch kubectl get pods,akrii
+```
+The empty nginx brokers do not do anything with the devices they've requested. Exec into the Pods to confirm that the
+`Device.properties` (Instance's `brokerProperties`) were set as environment variables.
+
+```sh
+sudo kubectl exec -i <broker pod name> -- /bin/bash -c "printenv"
+```
+
+## Create a broker
+Now that you can discover new devices, see our [documentation on creating brokers](./broker-development.md) to utilize
+discovered devices.
+
+## Contributing your Discovery Handler back to Akri
+Now that you have a working Discovery Handler and broker, we'd love for you to contribute your code to Akri. The
+following steps will need to be completed to do so:
+1. Create an Issue with a feature request for this Discovery Handler.
+2. Create a proposal and put in PR for it to be added to the [proposals folder](./proposals).
+3. Implement your Discovery Handler and a document named `/akri/docs/<discovery handler name>-configuration.md` on how to create a
+ Configuration that uses your Discovery Handler.
+4. Create a pull request that includes the Discovery Handler and Dockerfile in the [Discovery Handler
+ modules](../discovery-handler-modules) and [build](../build/containers) directories, respectively.
+ Be sure to also update the minor version of Akri. See [contributing](./contributing.md#versioning) to learn more
+ about our versioning strategy.
+
+For a Discovery Handler to be considered fully implemented, the following must be included in the PR.
+1. A new [`DiscoveryHandler`](../discovery-utils/proto/discovery.proto) implementation
+1. A [sample broker](./broker-development.md) for the new resource
+1. A sample Configuration that uses the new protocol in the form of a Helm template and values.
+1. (Optional) A sample end application that utilizes the services exposed by the Configuration
+1. Dockerfile[s] for broker [and sample app] and associated update to the [makefile](../build/akri-containers.mk)
+1. Github workflow[s] for broker [and sample app] to build containers and push to Akri container repository
+1. Documentation on how to use the new sample Configuration, like the [udev Configuration
+ document](./udev-configuration.md)
diff --git a/docs/end-to-end-demo.md b/docs/end-to-end-demo.md
index 07defdba4..16c94c77d 100644
--- a/docs/end-to-end-demo.md
+++ b/docs/end-to-end-demo.md
@@ -69,82 +69,9 @@ The following will be covered in this demo:
> ```
## Setting up a cluster
+Reference our [cluster setup documentation](./setting-up-cluster.md) to set up a new cluster or adapt an existing one.
-**Note:** Feel free to deploy on any Kubernetes distribution. Here, find instructions for K3s and MicroK8s. Select and
-carry out one or the other (or adapt to your distribution), then continue on with the rest of the steps.
-
-### Option 1: Set up single node cluster using K3s
-1. Install [K3s](https://k3s.io/) v1.18.9+k3s1.
- ```sh
- curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.18.9+k3s1 sh -
- ```
-1. Grant admin privilege to access kubeconfig.
- ```sh
- sudo addgroup k3s-admin
- sudo adduser $USER k3s-admin
- sudo usermod -a -G k3s-admin $USER
- sudo chgrp k3s-admin /etc/rancher/k3s/k3s.yaml
- sudo chmod g+r /etc/rancher/k3s/k3s.yaml
- su - $USER
- ```
-1. Check K3s status.
- ```sh
- kubectl get node
- ```
-1. Install Helm.
- ```sh
- export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
- sudo apt install -y curl
- curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
- ```
-1. K3s uses its own embedded crictl, so we need to configure the Akri Helm chart with the k3s crictl path and socket.
- ```sh
- export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/run/k3s/containerd/containerd.sock"
- ```
-
-### Option 2: Set up single node cluster using MicroK8s
-1. Install [MicroK8s](https://microk8s.io/docs).
- ```sh
- sudo snap install microk8s --classic --channel=1.18/stable
- ```
-1. Grant admin privilege for running MicroK8s commands.
- ```sh
- sudo usermod -a -G microk8s $USER
- sudo chown -f -R $USER ~/.kube
- su - $USER
- ```
-1. Check MicroK8s status.
- ```sh
- microk8s status --wait-ready
- ```
-1. Enable CoreDNS, Helm and RBAC for MicroK8s.
- ```sh
- microk8s enable dns helm3 rbac
- ```
-1. If you don't have an existing `kubectl` and `helm` installations, add aliases. If you do not want to set an alias, add `microk8s` in front of all `kubectl` and `helm` commands.
- ```sh
- alias kubectl='microk8s kubectl'
- alias helm='microk8s helm3'
- ```
-1. For the sake of this demo, the udev video broker pods run privileged to easily grant them access to video devices, so
- enable privileged pods and restart MicroK8s. More explicit device access could have been configured by setting the
- appropriate [security context](udev-configuration.md#setting-the-broker-pod-security-context) in the broker PodSpec
- in the Configuration.
- ```sh
- echo "--allow-privileged=true" >> /var/snap/microk8s/current/args/kube-apiserver
- microk8s.stop
- microk8s.start
- ```
-1. Akri depends on crictl to track some Pod information. MicroK8s does not install crictl locally, so crictl must be installed and the Akri Helm chart needs to be configured with the crictl path and MicroK8s containerd socket.
- ```sh
- # Note that we aren't aware of any version restrictions
- VERSION="v1.17.0"
- curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
- sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
- rm -f crictl-$VERSION-linux-amd64.tar.gz
-
- export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/var/snap/microk8s/common/run/containerd.sock"
- ```
+> Note: if using MicroK8s, enable privileged Pods, as the udev video broker pods run privileged to easily grant them access to video devices. More explicit device access could instead be granted by setting the appropriate [security context](udev-configuration.md#setting-the-broker-pod-security-context) in the broker PodSpec in the Configuration.
## Installing Akri
You tell Akri what you want to find with an Akri Configuration, which is one of Akri's Kubernetes custom resources. The Akri Configuration is simply a `yaml` file that you apply to your cluster. Within it, you specify three things:
diff --git a/docs/extensibility.md b/docs/extensibility.md
index fe67738ce..b5a8682a5 100644
--- a/docs/extensibility.md
+++ b/docs/extensibility.md
@@ -1,25 +1,12 @@
-# Extensibility
+# Extensibility Example
+This document will walk through an end-to-end example of creating a Discovery Handler to discover **HTTP-based devices**
+that publish random sensor data. It will also walk through how to create a custom broker to leverage the discovered
+devices. Reference the [Discovery Handler development](./discovery-handler-development.md) and [broker Pod
+development](./broker-development.md) documents if you prefer generic documentation over an example.
-Akri has [implemented several discovery protocols](./roadmap.md#currently-supported-protocols) with sample brokers and
-applications. However, there may be protocols you would like to use to discover resources that have not been implemented
-yet. To enable the discovery of resources via a new protocol, you will implement a Discovery Handler (DH), which does
-discovery on behalf of the Agent. A Discovery Handler is anything that implements the `Discovery` service and
-`Registration` client defined in the [Akri's discovery gRPC proto file](../discovery-utils/proto/discovery.proto). These
-DHs run as their own Pods and are expected to register with the Agent, which hosts the `Registration` service defined in
-the gRPC interface. A discovery handler can be written in any language using protobuf; however, Akri has provided a
-template for accelerating creating a discovery handler in Rust.
-
-This document will walk you through the development steps to implement a Discovery Handler and sample broker that
-utilizes exposed devices. This document will also cover the steps to get your Discovery Handler and broker added to
-Akri, should you wish to [contribute them back](./contributing.md).
-
-Before continuing, please read the [Akri architecture](./architecture.md), [Akri agent](./agent-in-depth.md), and
-[development](./development.md) documentation pages. They will provide a good understanding of Akri, how it works, what
-components it is composed of, and how to build it.
-> **Note:** a Discovery Handler can use any set of steps to discover devices. It does not have to be a "protocol" in the
-> traditional sense. For example, Akri defines udev (not often called a "protocol") and OPC UA as protocols.
-
-Here, we will create a Discovery Handler to discover **HTTP-based devices** that publish random sensor data.
+Before continuing, you may wish to reference the [Akri architecture](./architecture.md) and [Akri
+agent](./agent-in-depth.md) documentation. They will provide a good understanding of Akri, how it works, and what
+components it is composed of.
Any Docker-compatible container registry will work for hosting the containers being used in this example (Docker Hub,
Github Container Registry, Azure Container Registry, etc). Here, we are using the [GitHub Container
@@ -31,6 +18,15 @@ yourself](https://docs.github.com/en/free-pro-team@latest/packages/getting-start
> docker-registry crPullSecret --docker-server= --docker-username= --docker-password=`) and
> access it with an `imagePullSecret`. Here, we will assume the secret is named `crPullSecret`.
+## Background on Discovery Handlers
+Akri has [implemented discovery via several protocols](./roadmap.md#currently-supported-discovery-handlers) with sample
+brokers and applications to demonstrate usage. However, there may be protocols you would like to use to discover
+resources that have not been implemented as Discovery Handlers yet. To enable the discovery of resources via a new
+protocol, you will implement a Discovery Handler (DH), which does discovery on behalf of the Agent. A Discovery Handler
+is anything that implements the `Discovery` service and `Registration` client defined in [Akri's discovery gRPC
+proto file](../discovery-utils/proto/discovery.proto). These DHs run as their own Pods and are expected to register with
+the Agent, which hosts the `Registration` service defined in the gRPC interface.
+
## New DiscoveryHandler implementation
### Use `cargo generate` to clone the Discovery Handler template
Pull down the [Discovery Handler template](https://github.com/kate-goldenring/akri-discovery-handler-template) using
@@ -461,7 +457,10 @@ installation command:
```
Watch as the Agent, Controller, and Discovery Handler Pods are spun up and as Instances are created for each of the
-discovery devices. `watch kubectl get pods,akrii`
+discovered devices.
+```bash
+watch kubectl get pods,akrii
+```
If you simply wanted Akri to expose discovered devices to the cluster as Kubernetes resources, you could stop here. If
you have a workload that could utilize one of these resources, you could [manually deploy pods that request them as
@@ -664,8 +663,8 @@ used in our installation command.
--set custom.configuration.name=akri-http \
--set custom.configuration.discoveryHandlerName=http \
--set custom.configuration.discoveryDetails=http://discovery:9999/discovery \
- --set custom.brokerPod.image.repository=$BROKER_IMAGE \
- --set custom.brokerPod.image.tag=$TAGS
+ --set custom.configuration.brokerPod.image.repository=$BROKER_IMAGE \
+ --set custom.configuration.brokerPod.image.tag=$TAGS
watch kubectl get pods,akrii
```
> Note: substitute `helm upgrade` for `helm install` if you do not have an existing Akri installation
@@ -673,26 +672,4 @@ used in our installation command.
We can watch as the broker pods get deployed:
```bash
watch kubectl get pods -o wide
-```
-
-## Contributing your Protocol Implementation back to Akri
-Now that you have a working protocol implementation and broker, we'd love for you to contribute your code to Akri. The
-following steps will need to be completed to do so:
-1. Create an Issue with a feature request for this protocol.
-2. Create a proposal and put in PR for it to be added to the [proposals folder](./proposals).
-3. Implement your protocol and provide a full end to end sample.
-4. Create a pull request, that includes discovery handler and Dockerfile in the [discovery handler
- modules](../discovery-handler-modules) and [build](../build/containers/discovery-handlers) directories, respectively.
- Be sure to also update the minor version of Akri. See [contributing](./contributing.md#versioning) to learn more
- about our versioning strategy.
-
-For a protocol to be considered fully implemented the following must be included in the PR. Note that the HTTP protocol
-above has not completed all of the requirements.
-1. A new DiscoveryHandler implementation
-1. A sample protocol broker for the new resource
-1. A sample Configuration that uses the new protocol in the form of a Helm template and values
-1. (Optional) A sample end application that utilizes the services exposed by the Configuration
-1. Dockerfile[s] for broker [and sample app] and associated update to the [makefile](../build/akri-containers.mk)
-1. Github workflow[s] for broker [and sample app] to build containers and push to Akri container repository
-1. Documentation on how to use the new sample Configuration, like the [udev Configuration
- document](./udev-configuration.md)
+```
\ No newline at end of file
diff --git a/docs/media/akri-architecture.svg b/docs/media/akri-architecture.svg
index 39ed561cd..6b88a9e42 100644
--- a/docs/media/akri-architecture.svg
+++ b/docs/media/akri-architecture.svg
@@ -1,670 +1,406 @@
[SVG markup omitted: updated architecture diagram. Text labels renamed from "<protocolA>" to "<protocol>" (Configuration CRD, Instance CRD, <protocol> Configuration, <protocol> Instance, Broker, custom-broker, Leaf Device, etcd) and a new "<protocol> Discovery Handler" element added.]
diff --git a/docs/media/setting-broker-environment-variables.svg b/docs/media/setting-broker-environment-variables.svg
new file mode 100644
index 000000000..b8a52ac9f
--- /dev/null
+++ b/docs/media/setting-broker-environment-variables.svg
@@ -0,0 +1,239 @@
[SVG markup omitted: new diagram showing how a broker Pod's environment variables are set on NodeA. The udev Configuration's brokerProperties (FRAMES_PER_SECOND, RESOLUTION_WIDTH, RESOLUTION_HEIGHT) and the device properties reported by the udev Discovery Handler (UDEV_DEVNODE) both appear in the Instance's brokerProperties and are set as environment variables in the camera-broker Pod.]
diff --git a/docs/opcua-demo.md b/docs/opcua-demo.md
index 9e61e6bec..a9f5a1948 100644
--- a/docs/opcua-demo.md
+++ b/docs/opcua-demo.md
@@ -46,7 +46,7 @@ specifications](https://reference.opcfoundation.org/v104/).
## Setting up a single-node cluster
Before running Akri, we need a Kubernetes cluster. If you do not have a readily available cluster, follow the steps
-provided in the [end-to-end demo](./end-to-end-demo.md#set-up-cluster) to set up a single-node MicroK8s or K3s cluster. If using MicroK8s, you can skip the step of enabling privileged pods, as the OPC UA monitoring brokers do not need to run in a privileged security context.
+provided in the [cluster setup documentation](./setting-up-cluster.md).
## Creating X.509 v3 Certificates
**If security is not desired, this section can be skipped, as each monitoring broker will use an OPC UA Security Policy
@@ -109,14 +109,15 @@ to the OPC Foundation's .NET Console Reference Server.
1. Open the UA Reference solution file and navigate to NetCoreReferenceServer project.
-1. Open `Quickstarts.Reference.Config.xml`. This application configuration file is where many features can be configured,
- such as the application description (application name, uri, etc), security configuration, and base address. Only the
- latter needs to be modified if using no security. On lines 76 and 77, modify the address of the server, by replacing
- `localhost` with the IP address of the machine the server is running on. If left as `localhost` the application
- will automatically replace it with the hostname of the machine which will be unreachable to the broker pod. On the
- same lines, modify the ports if they are already taken. Akri will preference using the tcp endpoint, since according
- to the [OPC UA Security Specification](https://reference.opcfoundation.org/v104/Core/docs/Part2/4.10/), secure
- channels over HTTPS do not provide application authentication.
+1. Open `Quickstarts.Reference.Config.xml`. This application configuration file is where many features can be
+ configured, such as the application description (application name, uri, etc), security configuration, and base
+ address. Only the latter needs to be modified if using no security. On lines 76 and 77, modify the address of the
+ server, by replacing `localhost` with the IP address of the machine the server is running on. If left as `localhost`
+ the application will automatically replace it with the hostname of the machine which will be unreachable to the
+ broker pod. On the same lines, modify the ports if they are already taken. Akri will prefer using the tcp
+ endpoint, since according to the [OPC UA Security
+ Specification](https://reference.opcfoundation.org/v104/Core/docs/Part2/4.10/), secure channels over HTTPS do not
+ provide application authentication.
1. (Optional) If using security, and you have already created certificates in the previous section, now you can modify
the security configuration inside `Quickstarts.Reference.Config.xml` to point to those certificates. After using the
@@ -142,8 +143,9 @@ to the OPC Foundation's .NET Console Reference Server.
its variables) 2. We care about the `NamespaceIndex` because it along with `Identifier`, are the two fields to a
`NodeId`. If you inspect the `CreateDynamicVariable` function, you will see that it creates an OPC UA variable,
using the `path` parameter ("Thermometer_Temperature") as the `Identifier` when creating the NodeID for that
- variable. It then adds the variable to the `m_dynamicNodes` list. At the bottom of `CreateAddressSpace` the following
- line initializes a simulation that will periodically change the value of all the variables in `m_dynamicNodes`:
+ variable. It then adds the variable to the `m_dynamicNodes` list. At the bottom of `CreateAddressSpace` the
+ following line initializes a simulation that will periodically change the value of all the variables in
+ `m_dynamicNodes`:
``` c#
m_simulationTimer = new Timer(DoSimulation, null, 1000, 1000);
```
@@ -166,32 +168,33 @@ to the OPC Foundation's .NET Console Reference Server.
## Running Akri
1. Make sure your OPC UA Servers are running
-1. Now it is time to install the Akri using Helm. We can specify that when installing Akri, we also want to create an
- OPC UA Configuration by setting the helm value `--set opcua.enabled=true`. In the Configuration as environment
- variables in the broker PodSpec, we will specify the `Identifier` and `NamespaceIndex` of the NodeID we want the
- brokers to monitor. These values are mounted as environment variables in the brokers. In our case that is our
- temperature variable we made earlier, which has an `Identifier` of `Thermometer_Temperature` and `NamespaceIndex` of
- `2`. Finally, since we did not set up a Local Discovery Server -- see [Setting up and using a Local Discovery
+1. Now it is time to install Akri using Helm. When installing Akri, we can specify that we want to deploy the OPC UA
+ Discovery Handlers by setting the helm value `opcua.discovery.enabled=true`. We also specify that we want to create
+ an OPC UA Configuration with `--set opcua.configuration.enabled=true`. In the Configuration, any values that should
+ be set as environment variables in brokers can be set in `opcua.configuration.brokerProperties`. In this scenario, we
+   will specify the `Identifier` and `NamespaceIndex` of the NodeID we want the brokers to monitor. In our case, that is
+   the temperature variable we made earlier, which has an `Identifier` of `Thermometer_Temperature` and a `NamespaceIndex`
+ of `2`. Finally, since we did not set up a Local Discovery Server -- see [Setting up and using a Local Discovery
Server](#setting-up-and-using-a-local-discovery-server-(windows-only)) in the Extensions section at the bottom of
this document to use a LDS -- we must specify the DiscoveryURLs of the OPC UA Servers we want Agent to discover.
Those are the tcp addresses that we modified in step 3 of [Creating OPC UA Servers](#creating-opc-ua-servers). Be
sure to set the appropriate IP address and port number for the DiscoveryURLs in the Helm command below. If using
- security, uncomment `--set opcua.mountCertificates='true'`.
+ security, uncomment `--set opcua.configuration.mountCertificates='true'`.
```sh
helm repo add akri-helm-charts https://deislabs.github.io/akri/
helm install akri akri-helm-charts/akri-dev \
- --set opcua.enabled=true \
- --set opcua.name=akri-opcua-monitoring \
- --set opcua.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
- --set opcua.brokerPod.env.IDENTIFIER='Thermometer_Temperature' \
- --set opcua.brokerPod.env.NAMESPACE_INDEX='2' \
- --set opcua.discoveryUrls[0]="opc.tcp://:/Quickstarts/ReferenceServer/" \
- --set opcua.discoveryUrls[1]="opc.tcp://:/Quickstarts/ReferenceServer/" \
- # --set opcua.mountCertificates='true'
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.name=akri-opcua-monitoring \
+ --set opcua.configuration.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
+ --set opcua.configuration.brokerProperties.IDENTIFIER='Thermometer_Temperature' \
+ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:/Quickstarts/ReferenceServer/" \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[1]="opc.tcp://:/Quickstarts/ReferenceServer/" \
+ # --set opcua.configuration.mountCertificates='true'
```
Akri Agent will discover the two Servers and create an Instance for each Server. Watch two broker pods spin up, one
- for each Server.
- For MicroK8s
+ for each Server. For MicroK8s
```sh
watch microk8s kubectl get pods -o wide
```
@@ -241,8 +244,8 @@ in the OPC UA Servers.
```
1. Navigate in your browser to http://ip-address:32624/ where ip-address is the IP address of your Ubuntu VM (not the
cluster-IP) and the port number is from the output of `kubectl get services`. It takes 3 seconds for the site to
- load, after which, you should see a log of the temperature values, which updates every few seconds. Note how the values
- are coming from two different DiscoveryURLs, namely the ones for each of the two OPC UA Servers.
+ load, after which, you should see a log of the temperature values, which updates every few seconds. Note how the
+ values are coming from two different DiscoveryURLs, namely the ones for each of the two OPC UA Servers.
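+   To find the right port number, a minimal sketch (the service name `akri-anomaly-detection-app` is an assumption
+   based on the sample deployment file referenced in this demo):
+   ```sh
+   # List services and find the anomaly detection app's NodePort
+   # (the port after the colon in the PORT(S) column, e.g. 32624)
+   kubectl get services
+   ```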
## Clean up
1. Delete the anomaly detection application deployment and service.
@@ -280,8 +283,12 @@ the advantages of Akri. This section will cover:
1. Creating a new OPC UA Configuration
### Adding a Node to the cluster
-To see how Akri easily scales as nodes are added to the cluster, add another node to your (K3s, MicroK8s, or vanilla Kubernetes) cluster.
-1. If you are using MicroK8s, create another MicroK8s instance, following the same steps as in [Setting up a single-node cluster](#setting-up-a-single-node-cluster) above. Then, in your first VM that is currently running Akri, get the join command by running `microk8s add-node`. In your new VM, run one of the join commands outputted in the previous step.
+To see how Akri easily scales as nodes are added to the cluster, add another node to your (K3s, MicroK8s, or vanilla
+Kubernetes) cluster.
+1. If you are using MicroK8s, create another MicroK8s instance, following the same steps as in [Setting up a single-node
+ cluster](#setting-up-a-single-node-cluster) above. Then, in your first VM that is currently running Akri, get the
+ join command by running `microk8s add-node`. In your new VM, run one of the join commands outputted in the previous
+ step.
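+   A minimal sketch of that join flow (the IP and token in the join command are placeholders printed by
+   `microk8s add-node`):
+   ```sh
+   # On the first VM (currently running Akri), generate join commands
+   microk8s add-node
+   # On the new VM, run one of the printed join commands, for example:
+   # microk8s join <first-VM-IP>:25000/<token>
+   ```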
1. Confirm that you have successfully added a node to the cluster by running the following in your control plane VM:
```sh
kubectl get no
@@ -294,18 +301,20 @@ To see how Akri easily scales as nodes are added to the cluster, add another nod
```
1. Let's play around with the capacity value and use the `helm upgrade` command to modify our OPC UA Monitoring
Configuration such that the capacity is 2. On the control plane node, run the following, once again uncommenting
- `--set opcua.mountCertificates='true'` if using security. Watch as the broker terminates and then four come online in
- a Running state.
+   `--set opcua.configuration.mountCertificates='true'` if using security. Watch as the brokers terminate and then four
+   come online in a Running state.
```sh
- helm upgrade akri akri-helm-charts/akri \
- --set opcua.enabled=true \
- --set opcua.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
- --set opcua.brokerPod.env.IDENTIFIER='Thermometer_Temperature' \
- --set opcua.brokerPod.env.NAMESPACE_INDEX='2' \
- --set opcua.discoveryUrls[0]="opc.tcp://:/Quickstarts/ReferenceServer/" \
- --set opcua.discoveryUrls[1]="opc.tcp://:/Quickstarts/ReferenceServer/" \
- --set opcua.capacity=2 \
- # --set opcua.mountCertificates='true'
+ helm upgrade akri akri-helm-charts/akri-dev \
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.name=akri-opcua-monitoring \
+ --set opcua.configuration.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
+ --set opcua.configuration.brokerProperties.IDENTIFIER='Thermometer_Temperature' \
+ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:/Quickstarts/ReferenceServer/" \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[1]="opc.tcp://:/Quickstarts/ReferenceServer/" \
+   --set opcua.configuration.capacity=2 \
+ # --set opcua.configuration.mountCertificates='true'
```
For MicroK8s
```sh
@@ -315,12 +324,13 @@ To see how Akri easily scales as nodes are added to the cluster, add another nod
```sh
watch kubectl get pods,akrii -o wide
```
-1. Once you are done using Akri, you can remove your worker node from the cluster. For MicroK8s this is done by running on the worker node:
+1. Once you are done using Akri, you can remove your worker node from the cluster. For MicroK8s this is done by running
+ on the worker node:
```sh
microk8s leave
```
- Then, to complete the node removal, on the host run the following, inserting the name of the worker node (you can look it
- up with `microk8s kubectl get no`):
+ Then, to complete the node removal, on the host run the following, inserting the name of the worker node (you can
+ look it up with `microk8s kubectl get no`):
```sh
microk8s remove-node
```
@@ -351,16 +361,16 @@ Replace "Windows host IP address" with the IP address of the Windows machine you
the servers). Be sure to uncomment mounting certificates if you are enabling security:
```sh
helm install akri akri-helm-charts/akri-dev \
- --set opcua.enabled=true \
- --set opcua.name=akri-opcua-monitoring \
- --set opcua.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
- --set opcua.brokerPod.env.IDENTIFIER='Thermometer_Temperature' \
- --set opcua.brokerPod.env.NAMESPACE_INDEX='2' \
- --set opcua.discoveryUrls[0]="opc.tcp://:4840/" \
- # --set opcua.mountCertificates='true'
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.name=akri-opcua-monitoring \
+ --set opcua.configuration.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
+ --set opcua.configuration.brokerProperties.IDENTIFIER='Thermometer_Temperature' \
+ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:4840/" \
+ # --set opcua.configuration.mountCertificates='true'
```
-You can watch as an Instance is created for each Server and two broker pods are spun up.
-For MicroK8s
+You can watch as an Instance is created for each Server and two broker pods are spun up. For MicroK8s
```sh
watch microk8s kubectl get pods,akrii -o wide
```
@@ -376,27 +386,29 @@ specified by UA Specification 12). For example, to discover all servers register
server named "SomeServer0", do the following.
```bash
helm install akri akri-helm-charts/akri-dev \
- --set opcua.enabled=true \
- --set opcua.name=akri-opcua-monitoring \
- --set opcua.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
- --set opcua.brokerPod.env.IDENTIFIER='Thermometer_Temperature' \
- --set opcua.brokerPod.env.NAMESPACE_INDEX='2' \
- --set opcua.discoveryUrls[0]="opc.tcp://:4840/" \
- --set opcua.applicationNames.action=Exclude \
- --set opcua.applicationNames.items[0]="SomeServer0" \
- # --set opcua.mountCertificates='true'
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.name=akri-opcua-monitoring \
+ --set opcua.configuration.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
+ --set opcua.configuration.brokerProperties.IDENTIFIER='Thermometer_Temperature' \
+ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:4840/" \
+ --set opcua.configuration.discoveryDetails.applicationNames.action=Exclude \
+ --set opcua.configuration.discoveryDetails.applicationNames.items[0]="SomeServer0" \
+ # --set opcua.configuration.mountCertificates='true'
```
Alternatively, to only discover the server named "SomeServer0", do the following:
```bash
helm install akri akri-helm-charts/akri-dev \
- --set opcua.enabled=true \
- --set opcua.name=akri-opcua-monitoring \
- --set opcua.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
- --set opcua.brokerPod.env.IDENTIFIER='Thermometer_Temperature' \
- --set opcua.brokerPod.env.NAMESPACE_INDEX='2' \
- --set opcua.discoveryUrls[0]="opc.tcp://:4840/" \
- --set opcua.applicationNames.action=Include \
- --set opcua.applicationNames.items[0]="SomeServer0" \
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.name=akri-opcua-monitoring \
+ --set opcua.configuration.brokerPod.image.repository="ghcr.io/deislabs/akri/opcua-monitoring-broker" \
+ --set opcua.configuration.brokerProperties.IDENTIFIER='Thermometer_Temperature' \
+ --set opcua.configuration.brokerProperties.NAMESPACE_INDEX='2' \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:4840/" \
+ --set opcua.configuration.discoveryDetails.applicationNames.action=Include \
+ --set opcua.configuration.discoveryDetails.applicationNames.items[0]="SomeServer0" \
-   # --set opcua.mountCertificates='true'
+   # --set opcua.configuration.mountCertificates='true'
```
### Creating a different broker and end application
@@ -405,19 +417,20 @@ Variable for anomalies. The workload or broker you want to deploy to discovered
Servers' address spaces are widely varied, so the options for broker implementations are endless. Passing the NodeID
`Identifier` and `NamespaceIndex` as environment variables may still suit your needs; however, if targeting one NodeID
is too limiting or irrelevant, instead of passing a specific NodeID to your broker Pods, you could specify any other
-environment variables via `--set opcua.brokerPod.env.KEY='VALUE'`. Or, your broker may not need additional information
-passed to it at all. Decide whether to pass environment variables, what servers to discover, and set the broker pod
-image to be your container image, say `ghcr.io//opcua-broker`.
+environment variables via `--set opcua.configuration.brokerProperties.KEY='VALUE'`. Or, your broker may not need
+additional information passed to it at all. Decide whether to pass environment variables, what servers to discover, and
+set the broker pod image to be your container image, say `ghcr.io//opcua-broker`.
```sh
helm repo add akri-helm-charts https://deislabs.github.io/akri/
helm install akri akri-helm-charts/akri-dev \
- --set opcua.enabled=true \
- --set opcua.discoveryUrls[0]="opc.tcp://:/" \
- --set opcua.discoveryUrls[1]="opc.tcp://:/" \
- --set opcua.brokerPod.image.repository='ghcr.io//opcua-broker'
- # --set opcua.mountCertificates='true'
+ --set opcua.discovery.enabled=true \
+ --set opcua.configuration.enabled=true \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[0]="opc.tcp://:/" \
+ --set opcua.configuration.discoveryDetails.discoveryUrls[1]="opc.tcp://:/" \
+ --set opcua.configuration.brokerPod.image.repository='ghcr.io//opcua-broker'
+ # --set opcua.configuration.mountCertificates='true'
```
-> Note: set `opcua.brokerPod.image.tag` to specify an image tag (defaults to `latest`).
+> Note: set `opcua.configuration.brokerPod.image.tag` to specify an image tag (defaults to `latest`).
Now, your broker will be deployed to all discovered OPC UA servers. Next, you can create a Kubernetes deployment for
your own end application like [anomaly-detection-app.yaml](../deployment/samples/akri-anomaly-detection-app.yaml) and
@@ -425,11 +438,11 @@ apply it to your Kubernetes cluster.
### Creating a new OPC UA Configuration
Helm allows us to parametrize the commonly modified fields in our Configuration files, and we have provided many. Run
-`helm inspect values akri-helm-charts/akri` to see what values of the generic OPC UA Configuration can be customized,
-such as the Configuration and Instance `ServiceSpec`s, `capacity`, and broker `PodSpec`. We saw in the previous section
-how broker Pod environment variables can be specified via `--set opcua.brokerPod.env.KEY='VALUE'`. For more advanced
-configuration changes that are not aided by the generic OPC UA Configuration Helm chart, such as credentials naming, we
-suggest downloading the OPC UA Configuration file using Helm and then manually modifying it. See the documentation on
-[customizing an Akri
+`helm inspect values akri-helm-charts/akri-dev` to see what values of the generic OPC UA Configuration can be
+customized, such as the Configuration and Instance `ServiceSpec`s, `capacity`, and broker `PodSpec`. We saw in the
+previous section how broker Pod environment variables can be specified via `--set
+opcua.configuration.brokerProperties.KEY='VALUE'`. For more advanced configuration changes that are not aided by the
+generic OPC UA Configuration Helm chart, such as credentials naming, we suggest downloading the OPC UA Configuration
+file using Helm and then manually modifying it. See the documentation on [customizing an Akri
installation](./customizing-akri-installation.md#generating-modifying-and-applying-a-custom-configuration) for more
details.
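+As a rough sketch of that manual workflow (assuming the chart exposes `controller.enabled`, `agent.enabled`, and
+`rbac.enabled` toggles so that only the Configuration is rendered):
+```sh
+# Render only the OPC UA Configuration to a file for manual editing
+helm template akri akri-helm-charts/akri-dev \
+  --set controller.enabled=false \
+  --set agent.enabled=false \
+  --set rbac.enabled=false \
+  --set opcua.configuration.enabled=true > opcua-configuration.yaml
+# After editing the file, apply it to the cluster
+kubectl apply -f opcua-configuration.yaml
+```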
\ No newline at end of file
diff --git a/docs/roadmap.md b/docs/roadmap.md
index 90def157b..f8b07a7e2 100644
--- a/docs/roadmap.md
+++ b/docs/roadmap.md
@@ -3,13 +3,13 @@
There are endless sensors, controllers, and MCU class devices on the edge and each type of device has a different
discovery protocol. Akri is an interface for helping expose those devices as resources to your Kubernetes cluster on the
edge. Before it can add a device as a cluster resource, Akri must first discover the device using the appropriate
-protocol. Akri currently supports several protocols and was built in a modular way so as to continually support more.
+Discovery Handler. Akri currently supports several Discovery Handlers and was built in a modular way so as to continually support more.
The question is, which protocols should Akri prioritize? We are looking for community feedback to make this decision. If
there is a protocol that you would like implemented, check our [Issues](https://github.com/deislabs/akri/issues) to see
if that protocol has been requested, and thumbs up it so we know you, too, would like it implemented. If there is no
existing request for your protocol, create a [new feature request](https://github.com/deislabs/akri/issues/new/choose).
-Rather than waiting for it to be prioritized, you could implement discovery via that protocol in Agent. See [the
-extensibility document](./extensibility.md) for more details.
+Rather than waiting for it to be prioritized, you could implement a Discovery Handler for that protocol. See [the
+Discovery Handler development document](./discovery-handler-development.md) for more details.
### Currently supported Discovery Handlers
1. ONVIF (to discover IP cameras)
diff --git a/docs/setting-up-cluster.md b/docs/setting-up-cluster.md
new file mode 100644
index 000000000..00f2e0089
--- /dev/null
+++ b/docs/setting-up-cluster.md
@@ -0,0 +1,98 @@
+# Setting up your cluster
+Before deploying Akri, you must have a Kubernetes cluster (v1.16 or higher) running with `kubectl` and `Helm` installed. Akri is Kubernetes native, so it should run on most Kubernetes distributions. All of our end-to-end tests run on vanilla Kubernetes, K3s, and MicroK8s clusters. This documentation will walk through how to set up a cluster using one of those three distributions.
+
+>Note: All nodes must be Linux on amd64, arm64v8, or arm32v7.
+
+## Set up a standard Kubernetes cluster
+1. Reference [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) for instructions on how to install Kubernetes.
+1. Install Helm for deploying Akri.
+ ```sh
+ sudo apt install -y curl
+ curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
+ ```
+
+> Note: To enable workloads on a single-node cluster, remove the master taint.
+> ```sh
+> kubectl taint nodes --all node-role.kubernetes.io/master-
+> ```
+
+## Set up a K3s cluster
+1. Install [K3s](https://k3s.io/)
+ ```sh
+ curl -sfL https://get.k3s.io | sh -
+ ```
+
+ >Note: Optionally specify a version with the `INSTALL_K3S_VERSION` env var as follows: `curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.18.9+k3s1 sh -`
+1. Grant admin privilege to access kube config.
+ ```sh
+ sudo addgroup k3s-admin
+ sudo adduser $USER k3s-admin
+ sudo usermod -a -G k3s-admin $USER
+ sudo chgrp k3s-admin /etc/rancher/k3s/k3s.yaml
+ sudo chmod g+r /etc/rancher/k3s/k3s.yaml
+ su - $USER
+ ```
+1. Check K3s status.
+ ```sh
+ kubectl get node
+ ```
+1. Install Helm.
+ ```sh
+ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+ sudo apt install -y curl
+ curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
+ ```
+1. Akri depends on crictl to track some Pod information. If using K3s version 1.19 or greater, install crictl locally (note: there are no known version limitations, any crictl version is expected to work). Previous K3s versions come with crictl embedded.
+ ```sh
+ VERSION="v1.17.0"
+ curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
+ sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
+ rm -f crictl-$VERSION-linux-amd64.tar.gz
+ ```
+1. Configure Akri to use the crictl path and K3s containerd socket. This `AKRI_HELM_CRICTL_CONFIGURATION` environment variable should be added to all Akri Helm installations.
+ ```sh
+ export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/run/k3s/containerd/containerd.sock"
+ ```
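+    As a minimal sketch of how this variable is consumed later (the chart name may also be `akri-helm-charts/akri-dev`
+    depending on which version you install):
+    ```sh
+    # Splice the crictl configuration into any Akri Helm install or upgrade
+    helm install akri akri-helm-charts/akri \
+      $AKRI_HELM_CRICTL_CONFIGURATION
+    ```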
+1. Add nodes to your cluster by running the K3s installation script with the `K3S_URL` and `K3S_TOKEN` environment variables. See [K3s installation documentation](https://rancher.com/docs/k3s/latest/en/quick-start/#install-script) for more details.
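+    A minimal sketch of that join flow (the token path and the placeholder values are assumptions; check the K3s
+    documentation for your version):
+    ```sh
+    # On the control-plane node, read the cluster join token
+    sudo cat /var/lib/rancher/k3s/server/node-token
+    # On each new node, run the install script pointed at the existing server
+    curl -sfL https://get.k3s.io | K3S_URL=https://<control-plane-IP>:6443 K3S_TOKEN=<node-token> sh -
+    ```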
+
+## Set up a MicroK8s cluster
+1. Install [MicroK8s](https://microk8s.io/docs).
+ ```sh
+ sudo snap install microk8s --classic --channel=1.19/stable
+ ```
+1. Grant admin privilege for running MicroK8s commands.
+ ```sh
+ sudo usermod -a -G microk8s $USER
+ sudo chown -f -R $USER ~/.kube
+ su - $USER
+ ```
+1. Check MicroK8s status.
+ ```sh
+ microk8s status --wait-ready
+ ```
+1. Enable CoreDNS, Helm and RBAC for MicroK8s.
+ ```sh
+ microk8s enable dns helm3 rbac
+ ```
+1. If you don't have existing `kubectl` and `helm` installations, add the following aliases. If you do not want to set an alias, add `microk8s` in front of all `kubectl` and `helm` commands.
+ ```sh
+ alias kubectl='microk8s kubectl'
+ alias helm='microk8s helm3'
+ ```
+1. By default, MicroK8s does not allow Pods to run in a privileged context. None of Akri's components run privileged; however, if your custom broker Pods need to run privileged (for example, in order to access devices), enable privileged Pods like so:
+ ```sh
+ echo "--allow-privileged=true" >> /var/snap/microk8s/current/args/kube-apiserver
+ microk8s.stop
+ microk8s.start
+ ```
+1. Akri depends on crictl to track some Pod information. MicroK8s does not install crictl locally, so crictl must be installed and the Akri Helm chart needs to be configured with the crictl path and MicroK8s containerd socket.
+ ```sh
+ # Note that we aren't aware of any version restrictions
+ VERSION="v1.17.0"
+ curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
+ sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
+ rm -f crictl-$VERSION-linux-amd64.tar.gz
+
+ export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/var/snap/microk8s/common/run/containerd.sock"
+ ```
+1. To add additional nodes to the cluster, reference [MicroK8s' documentation](https://microk8s.io/docs/clustering).
diff --git a/docs/user-guide.md b/docs/user-guide.md
index 1572ff25a..3f4088d07 100644
--- a/docs/user-guide.md
+++ b/docs/user-guide.md
@@ -49,59 +49,11 @@ To see which version of the **akri** and **akri-dev** Helm charts are stored loc
To grab the latest Akri Helm charts, run `helm repo update`.
### Setting up your cluster
-Before deploying Akri, you must have a Kubernetes, K3s, or MicroK8s cluster (v1.16 or higher) running with `kubectl` support installed. All nodes must be Linux. All of the Akri component containers are currently built for amd64, arm64v8, or arm32v7, so all nodes must have one of these platforms.
-1. Install Helm
- ```sh
- curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
- ```
-1. Provide runtime-specific configuration to enable Akri and Helm
-
- 1. If using **K3s**, point to `kubeconfig` for Helm, install crictl, and configure Akri to use K3s' CRI socket.
- ```sh
- # Install crictl locally (note: there are no known version limitations, any crictl version is expected to work).
- # This step is not necessary if using a K3s version below 1.19, in which case K3s' embedded crictl can be used.
- VERSION="v1.17.0"
- curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
- sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
- rm -f crictl-$VERSION-linux-amd64.tar.gz
-
- # Helm uses $KUBECONFIG to find the Kubernetes configuration
- export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
-
- # Configure Akri to use K3s' embedded crictl and CRI socket
- export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/run/k3s/containerd/containerd.sock"
- ```
- 1. If using **MicroK8s**, enable CoreDNS, RBAC (optional), and Helm. If your broker Pods must run privileged, enable
- privileged Pods. Also, install crictl, and configure Akri to use MicroK8s' CRI socket.
- ```sh
- # Enable CoreDNS, RBAC and Helm
- microk8s enable dns rbac helm3
-
- # Optionally enable privileged pods (if your broker Pods must run privileged) and restart MicroK8s.
- echo "--allow-privileged=true" >> /var/snap/microk8s/current/args/kube-apiserver
- sudo microk8s stop && microk8s start
-
- # Install crictl locally (note: there are no known version
- # limitations, any crictl version is expected to work)
- VERSION="v1.17.0"
- curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-amd64.tar.gz --output crictl-${VERSION}-linux-amd64.tar.gz
- sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
- rm -f crictl-$VERSION-linux-amd64.tar.gz
-
- # Configure Akri to use MicroK8s' CRI socket
- export AKRI_HELM_CRICTL_CONFIGURATION="--set agent.host.crictl=/usr/local/bin/crictl --set agent.host.dockerShimSock=/var/snap/microk8s/common/run/containerd.sock"
- ```
- If you don't have existing kubectl and helm installations, you can add aliases. If you do not want to set an
- alias, add microk8s in front of all kubectl and helm commands.
- ```sh
- alias kubectl='microk8s kubectl'
- alias helm='microk8s helm3'
- ```
- 1. If using **Kubernetes**, Helm and crictl do not require additional configuration.
+Before deploying Akri, you must have a Kubernetes cluster (v1.16 or higher) running with `kubectl` and `Helm` installed. Reference our [cluster setup documentation](./setting-up-cluster.md) to set up a cluster or adapt your existing cluster. Akri currently supports Linux Nodes on amd64, arm64v8, or arm32v7.
### Installing Akri Flow
Akri is installed using its Helm Chart, which contains settings for deploying the Akri Agents, Controller, Discovery Handlers, and Configurations. All these can be installed in one command, in several different Helm installations, or via consecutive `helm upgrades`. This section will focus on the latter strategy, helping you construct your Akri installation command, assuming you have already decided what you want Akri to discover.
-Akri's Helm chart deploys the Akri Controller and Agent by default, so you only need to specify which Discovery Handlers and Configurations need to be deployed in your command. Akri discovers devices via Discovery Handlers, which are often protocol implementations. Akri currently supports three Discovery Handlers (udev, OPC UA and ONVIF); however, custom discovery handlers can be created and deployed as explained in Akri's [extensibility document](./extensibility.md). Akri is told what to discover via Akri Configurations, which specify the name of the Discovery Handler that should be used, any discovery details (such as filters) that need to be passed to the Discovery Handler, and optionally any broker Pods and services that should be created upon discovery. For example, the ONVIF Discovery Handler can receive requests to include or exclude cameras with certain IP addresses.
+Akri's Helm chart deploys the Akri Controller and Agent by default, so you only need to specify which Discovery Handlers and Configurations need to be deployed in your command. Akri discovers devices via Discovery Handlers, which are often protocol implementations. Akri currently supports three Discovery Handlers (udev, OPC UA and ONVIF); however, custom discovery handlers can be created and deployed as explained in Akri's [Discovery Handler development document](./discovery-handler-development.md). Akri is told what to discover via Akri Configurations, which specify the name of the Discovery Handler that should be used, any discovery details (such as filters) that need to be passed to the Discovery Handler, and optionally any broker Pods and services that should be created upon discovery. For example, the ONVIF Discovery Handler can receive requests to include or exclude cameras with certain IP addresses.
Let's walk through building an Akri installation command:
@@ -109,7 +61,7 @@ Let's walk through building an Akri installation command:
```sh
helm repo add akri-helm-charts https://deislabs.github.io/akri/
```
-2. Install Akri's Controller and Agent, specifying the crictl configuration from [prerequisites above](#setting-up-your-cluster) in not using vanilla Kubernetes:
+2. Install Akri's Controller and Agent, specifying the crictl configuration from [the cluster setup steps](./setting-up-cluster.md) if not using vanilla Kubernetes:
```sh
helm install akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION
@@ -124,11 +76,11 @@ Let's walk through building an Akri installation command:
> Note: To install a full Agent with embedded udev, OPC UA, and ONVIF Discovery Handlers, set `agent.full=true` instead of enabling the Discovery Handlers. Note that this will restart the
> Agent Pods.
> ```sh
- > helm upgrade akri akri-helm-charts/akri \
+ > helm upgrade akri akri-helm-charts/akri-dev \
> --set agent.full=true
> ```
-4. Upgrade the installation to apply a Configuration, which requests discovery of certain devices by a Discovery Handler. A Configuration is applied by setting `.configuration.enabled`. While some Configurations may not require any discovery details to be set, oftentimes setting details is preferable for narrowing the Discovery Handlers' search. These are set under `.configuration.discoveryDetails`. For example, udev rules are passed to the udev Discovery Handler to specify which devices in the Linux device file system it should search for by setting `udev.configuration.discoveryDetails.udevRules`. Akri can be instructed to automatically deploy workloads called "brokers" to each discovered device by setting a broker Pod image in a Configuration via `--set .configuration.brokerPod.image.repository=`.
+4. Upgrade the installation to apply a Configuration, which requests discovery of certain devices by a Discovery Handler. A Configuration is applied by setting `.configuration.enabled`. While some Configurations may not require any discovery details to be set, oftentimes setting details is preferable for narrowing the Discovery Handlers' search. These are set under `.configuration.discoveryDetails`. For example, udev rules are passed to the udev Discovery Handler to specify which devices in the Linux device file system it should search for by setting `udev.configuration.discoveryDetails.udevRules`. Akri can be instructed to automatically deploy workloads called "brokers" to each discovered device by setting a broker Pod image in a Configuration via `--set .configuration.brokerPod.image.repository=`. Learn more about creating brokers in the [broker development document](./broker-development.md).
```sh
helm upgrade akri akri-helm-charts/akri-dev \
--set .discovery.enabled=true \