Merge pull request #2866 from ncdc/docs
🌱 minor docs improvements
openshift-merge-robot authored Mar 3, 2023
2 parents 749788f + f0e994a commit e1353f4
Showing 22 changed files with 296 additions and 176 deletions.
1 change: 0 additions & 1 deletion .github/workflows/docs-gen-and-push.yaml
```diff
@@ -15,7 +15,6 @@ on:
       - "docs/**"
       - "pkg/**"
       - ".github/workflows/docs-gen-and-push.yaml"
-      - "hack/deploy-docs.sh"
 
 permissions:
   contents: write
```
4 changes: 2 additions & 2 deletions Makefile
```diff
@@ -174,12 +174,12 @@ REQUIREMENTS_TXT=docs/requirements.txt
 .PHONY: serve-docs
 serve-docs: venv
 	. $(VENV)/activate; \
-	VENV=$(VENV) REMOTE=$(REMOTE) BRANCH=$(BRANCH) hack/serve-docs.sh
+	VENV=$(VENV) REMOTE=$(REMOTE) BRANCH=$(BRANCH) docs/scripts/serve-docs.sh
 
 .PHONY: deploy-docs
 deploy-docs: venv
 	. $(VENV)/activate; \
-	REMOTE=$(REMOTE) BRANCH=$(BRANCH) hack/deploy-docs.sh
+	REMOTE=$(REMOTE) BRANCH=$(BRANCH) docs/scripts/deploy-docs.sh
 
 vendor: ## Vendor the dependencies
 	go mod tidy
```
5 changes: 5 additions & 0 deletions docs/content/concepts/apis-in-kcp.md
```diff
@@ -1,3 +1,8 @@
+---
+description: >
+  What APIs come standard, how to share APIs with others, how to consume shared APIs.
+---
+
 # APIs in kcp
 
 ## Overview
```
2 changes: 1 addition & 1 deletion docs/content/concepts/authorization.md
```diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to authorize requests to kcp
+  How to authorize requests to kcp.
 ---
 
 # Authorization
```
2 changes: 1 addition & 1 deletion docs/content/concepts/cluster-mapper.md
```diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to use the cluster mapper
+  How to use the cluster mapper.
 ---
 
 # Cluster Mapper
```
117 changes: 2 additions & 115 deletions docs/content/concepts/index.md
```diff
@@ -1,116 +1,3 @@
----
-description: >
-  Contains the definitions shared across design documents around prototyping a kube-like control plane (in KCP). This is
-  a derivative work of other design documents intended to frame terminology. All future statements that may be changed by
-  designs is covered by those designs, and not duplicated here.
----
 # kcp concepts
-
-# Terminology for kcp
-
-## Logical cluster
-
-A logical cluster is a way to subdivide a single kube-apiserver + etcd storage into multiple clusters (different APIs,
-separate semantics for access, policy, and control) without requiring multiple instances. A logical cluster is a
-mechanism for achieving separation, but may be modelled differently in different use cases. A logical cluster is
-similar to a virtual cluster as defined by sig-multicluster, but is able to amortize the cost of a new cluster to be
-zero or near-zero memory and storage so that we can create tens of millions of empty clusters cheaply.
-
-A logical cluster is a storage level concept that adds an additional attribute to an object’s identifier on a
-kube-apiserver. Regular servers identify objects by (group, version, resource, optional namespace, name). A logical
-cluster enriches an identifier: (group, version, resource, **logical cluster name**, optional namespace, name).
-
-## Workload Cluster
-
-A physical cluster is a “real Kubernetes cluster”, i.e. one that can run Kubernetes workloads and accepts standard
-Kubernetes API objects. For the near term, it is assumed that a physical cluster is a distribution of Kubernetes and
-passes the conformance tests and exposes the behavior a regular Kubernetes admin or user expects.
-
-## Workspace
-
-A workspace models a set of user-facing APIs for CRUD. Each workspace is backed by a logical cluster, but not all
-logical clusters may be exposed as workspaces. Creating a Workspace object results in a logical cluster being available
-via a URL for the client to connect and create resources supported by the APIs in that workspace. There could be
-multiple different models that result in logical clusters being created, with different policies or lifecycles, but
-Workspace is intended to be the most generic representation of the concept with the broadest possible utility to anyone
-building control planes.
-
-A workspace binds APIs and makes them accessible inside the logical cluster, allocates capacity for creating instances
-of those APIs (quota), and defines how multi-workspace operations can be performed by users, clients, and controller
-integrations.
-
-To a user, a workspace appears to be a Kubernetes cluster minus all the container orchestration specific resources. It
-has its own discovery, its own OpenAPI spec, and follows the kube-like constraints about uniqueness of
-Group-Version-Resource and its behaviour (no two GVRs with different schemas can exist per workspace, but workspaces can
-have different schemas). A user can define a workspace as a context in a kubeconfig file and `kubectl get all -A` would
-return all objects in all namespaces of that workspace.
-
-Workspace naming is chosen to be aligned with the Kubernetes Namespace object - a Namespace subdivides a workspace by
-name, a workspace subdivides the universe into chunks of meaningful work.
-
-Workspaces are the containers for all API objects, so users orient by viewing lists of workspaces from APIs.
-
-## Workspace type
-
-Workspaces have types, which are mostly oriented around a set of default or optional APIs exposed. For instance, a
-workspace intended for use deploying Kube applications might expose the same API objects a user would encounter on a
-physical cluster. A workspace intended for building functions might expose only the knative serving APIs, config maps
-and secrets, and optionally enable knative eventing APIs.
-
-At the current time there is no decision on whether a workspace type represents an inheritance or composition model,
-although in general we prefer composition approaches. We also do not have a fully resolved design.
-
-## Virtual Workspace
-
-An API object has one source of truth (is stored transactionally in one system), but may be exposed to different use
-cases with different fields or schemas. Since a workspace is the user facing interaction with an API object, if we want
-to deal with Workspaces in aggregate, we need to be able to list them. Since a user may have access to workspaces in
-multiple different contexts, or for different use cases (a workspace that belongs to the user personally, or one that
-belongs to a business organization), the list of “all workspaces” itself needs to be exposed as an API object to an end
-user inside a workspace. That workspace is “virtual” - it adapts or transforms the underlying source of truth for the
-object and potentially the schema the user sees.
-
-## Index (e.g. Workspace Index)
-
-An index is the authoritative list of a particular API in their source of truth across the system. For instance, in
-order for a user to see all the workspaces they have available, they must consult the workspace index to return a list
-of their workspaces. It is expected that indices are suitable for consistent LIST/WATCHing (in the kubernetes sense) so
-that integrations can be built to view the list of those objects.
-
-Index in the control plane sense should not be confused with secondary indices (in the database sense), which may be
-used to enable a particular index.
-
-## Shard
-
-A failure domain within the larger control plane service that cuts across the primary functionality. Most distributed
-systems must separate functionality across shards to mitigate failures, and typically users interact with shards through
-some transparent serving infrastructure. Since the primary problem of building distributed systems is reasoning about
-failure domains and dependencies across them, it is critical to allow operators to effectively match shards, understand
-dependencies, and bring them together.
-
-A control plane should be shardable in a way that maximizes application SLO - gives users a tool that allows them to
-better define their applications not to fail.
-
-## API Binding
-
-The act of associating a set of APIs with a given logical cluster. The Workspace model defines one particular
-implementation of the lifecycle of a logical cluster and the APIs within it. Because APIs and the implementations that
-back an API evolve over time, it is important that the binding be introspectable and orchestrate-able - that a consumer
-can provide a rolling deployment of a new API or new implementation across hundreds or thousands of workspaces.
-
-There are likely a few objects involved in defining the APIs exposed within a workspace, but in general they probably
-define a spec (which APIs / implementations to associate with) and a status (the chosen APIs / implementations that are
-currently bound), allow a user to bulk associate APIs (i.e. multiple APIs at the same time, like “all knative serving
-APIs”), and may be defaulted based on some attributes of a workspace type (all workspaces of this “type” get the default
-Kube APIs, this other “type” get the knative apis).
-
-The evolution of an API within a workspace and across workspaces is of key importance.
-
-## Syncer
-
-A syncer is installed on a SyncTarget and is responsible for synchronizing data between kcp and that cluster.
-
-## Location
-
-A collection of SyncTargets that describe runtime characteristics that allow placement of applications.
-Characteristics are not limited but could describe things like GPU, supported storage, compliance or
-regulatory fulfillment, or geographical placement.
+
+{% include "partials/section-overview.html" %}
```
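The removed terminology above describes how a logical cluster adds one extra attribute to the usual kube-apiserver object identifier, turning (group, version, resource, optional namespace, name) into (group, version, resource, logical cluster name, optional namespace, name). A minimal Go sketch of that enriched key, with hypothetical types for illustration only (not kcp's actual code):

```go
package main

import "fmt"

// ObjectKey is a hypothetical illustration of the enriched identifier
// described in the deleted terminology doc: a regular kube-apiserver key
// plus the logical cluster name that kcp adds at the storage level.
type ObjectKey struct {
	Group, Version, Resource string
	LogicalCluster           string // the extra attribute introduced by kcp
	Namespace                string // optional; empty for cluster-scoped objects
	Name                     string
}

// String renders the key with the logical cluster segment made explicit.
func (k ObjectKey) String() string {
	return fmt.Sprintf("%s/%s/%s|%s|%s/%s",
		k.Group, k.Version, k.Resource, k.LogicalCluster, k.Namespace, k.Name)
}

func main() {
	k := ObjectKey{
		Group: "apps", Version: "v1", Resource: "deployments",
		LogicalCluster: "root:org:team", Namespace: "default", Name: "web",
	}
	fmt.Println(k)
}
```

The point of the extra segment is that two objects with identical (GVR, namespace, name) can coexist in one storage backend as long as their logical cluster names differ.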
2 changes: 1 addition & 1 deletion docs/content/concepts/kubectl-kcp-plugin.md
```diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to use the kubectl kcp plugin
+  How to use the kubectl kcp plugin.
 ---
 
 # kubectl kcp plugin
```
4 changes: 2 additions & 2 deletions docs/content/concepts/partitions.md
```diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to create shard partitions
+  How to create shard partitions.
 ---
 
 # Partition API
@@ -69,4 +69,4 @@ status:
 
 It is to note that a `Partition` is created only if it matches at least one shard. With the provided example if there is no shard in the cloud provider `aliyun` in the region `europe` no `Partition` will be created for it.
 
-An example of a `Partition` generated by this `PartitionSet` can be found above. The `dimensions` are translated into `matchLabels` with values specific to each `Partition`. An owner reference of the `Partition` will be set to the `PartitionSet`.
\ No newline at end of file
+An example of a `Partition` generated by this `PartitionSet` can be found above. The `dimensions` are translated into `matchLabels` with values specific to each `Partition`. An owner reference of the `Partition` will be set to the `PartitionSet`.
```
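The partitions doc in this diff says each `PartitionSet`'s `dimensions` are translated into `matchLabels` with values specific to each `Partition`. A sketch of that translation for a single shard, using a hypothetical helper rather than kcp's actual implementation:

```go
package main

import "fmt"

// partitionLabels illustrates the dimensions->matchLabels translation
// described in the partitions doc: each dimension (a shard label key)
// is pinned to that shard's concrete value in the resulting Partition's
// matchLabels. Hypothetical helper for illustration; not kcp's code.
func partitionLabels(dimensions []string, shardLabels map[string]string) map[string]string {
	matchLabels := map[string]string{}
	for _, d := range dimensions {
		if v, ok := shardLabels[d]; ok {
			matchLabels[d] = v
		}
	}
	return matchLabels
}

func main() {
	// A shard labeled with region/cloud/env; only the dimensions
	// (region, cloud) end up in the Partition's matchLabels.
	labels := partitionLabels(
		[]string{"region", "cloud"},
		map[string]string{"region": "europe", "cloud": "aliyun", "env": "prod"},
	)
	fmt.Println(labels)
}
```

A shard matching none of the dimension values would simply yield no `Partition`, consistent with the "at least one shard" rule quoted above.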
2 changes: 1 addition & 1 deletion docs/content/concepts/quickstart-tenancy-and-apis.md
```diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to create a new API and use it with tenancy
+  How to create a new API and use it with tenancy.
 ---
 
 # Quickstart: Tenancy and APIs
```
````diff
@@ -1,6 +1,6 @@
 ---
 description: >
-  How to register Kubernetes clusters using syncer
+  How to register Kubernetes clusters using syncer.
 ---
 
 # Registering Kubernetes Clusters using syncer
@@ -134,7 +134,7 @@ kubectl kcp bind compute <workspace of synctarget> --location-selectors=env=test
 ```
 
 this command will create a `Placement` selecting a `Location` with label `env=test` and bind the selected `Location` to namespaces with
-label `purpose=workload`. See more details of placement and location [here](locations-and-scheduling.md)
+label `purpose=workload`. See more details of placement and location [here](placement-locations-and-scheduling.md)
 
 ### Running a workload
````