Format workspaces description and correct unclear words #3547

Merged (1 commit) on Nov 23, 2020
docs/workspaces.md: 50 changes (25 additions, 25 deletions)
`Workspaces` are similar to `Volumes` except that they allow a `Task` author
to defer to users and their `TaskRuns` when deciding which class of storage to use.

`Workspaces` can serve the following purposes:

- Storage of inputs and/or outputs
- Sharing data among `Tasks`
- A mount point for common tools shared by an organization
- A cache of build artifacts that speed up jobs

### `Workspaces` in `Tasks` and `TaskRuns`

`Tasks` specify where a `Workspace` resides on disk for their `Steps`. At
runtime, a `TaskRun` provides the specific details of the `Volume` that is
data for the `Task` to process. In both scenarios the `Task's`
`Workspace` declaration remains the same and only the runtime
information in the `TaskRun` changes.
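
The split described above can be sketched as follows. This is a minimal, hypothetical example (the resource names and the `alpine` image are illustrative, not from this doc): the `Task` only names the `Workspace`, while the `TaskRun` chooses the `Volume` backing it.

```yaml
# The Task declares a Workspace but says nothing about storage.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: read-workspace
spec:
  workspaces:
    - name: source
  steps:
    - image: alpine
      script: ls "$(workspaces.source.path)"
---
# The TaskRun supplies the runtime details - here, a throwaway emptyDir.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: read-workspace-run
spec:
  taskRef:
    name: read-workspace
  workspaces:
    - name: source
      emptyDir: {}
```

Swapping `emptyDir` for a `persistentVolumeClaim` in the `TaskRun` would change the storage class without touching the `Task` at all.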

`Tasks` can also share `Workspaces` with their `Sidecars`, though there's a little more
configuration involved to add the required `volumeMount`. This allows for a
long-running process in a `Sidecar` to share data with the executing `Steps` of a `Task`.

### `Workspaces` in `Pipelines` and `PipelineRuns`


### Optional `Workspaces`

Both `Tasks` and `Pipelines` can declare a `Workspace` "optional". When an optional `Workspace`
is declared, the `TaskRun` or `PipelineRun` may omit a `Workspace` Binding for that `Workspace`.
The `Task` or `Pipeline` behaviour may change when the Binding is omitted. This feature has
many uses:

- A `Task` may optionally accept credentials to run authenticated commands.
- A `Pipeline` may accept optional configuration that changes the linting or compilation
parameters used.
- An optional build cache may be provided to speed up compile times.
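
The first use above can be sketched with an optional `Workspace` plus the `$(workspaces.<name>.bound)` variable, which resolves to `true` or `false` at runtime. The `Task` name and `alpine` image here are illustrative assumptions:

```yaml
# Hypothetical Task: runs authenticated only when credentials are bound.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maybe-authenticated
spec:
  workspaces:
    - name: credentials
      optional: true
  steps:
    - image: alpine
      script: |
        if [ "$(workspaces.credentials.bound)" = "true" ]; then
          echo "credentials provided - running authenticated"
        else
          echo "no credentials - running anonymously"
        fi
```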


**Note:** `Sidecars` _must_ explicitly opt in to receiving the `Workspace` volume. Injected `Sidecars` from
non-Tekton sources will not receive access to `Workspaces`.
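
The opt-in can be sketched using the `$(workspaces.<name>.volume)` and `$(workspaces.<name>.path)` variables in an explicit `volumeMount`. This is a hedged sketch (the `signals` workspace name, `ready` file, and `alpine` image are illustrative), not the doc's full example:

```yaml
# Hypothetical Task: a Sidecar mounts the "signals" Workspace explicitly
# and signals readiness to a waiting Step by touching a file.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: sidecar-signal
spec:
  workspaces:
    - name: signals
  sidecars:
    - image: alpine
      # The explicit volumeMount is the opt-in; without it the Sidecar
      # cannot see the Workspace.
      volumeMounts:
        - name: $(workspaces.signals.volume)
          mountPath: $(workspaces.signals.path)
      script: |
        touch "$(workspaces.signals.path)/ready"
  steps:
    - image: alpine
      script: |
        while [ ! -f "$(workspaces.signals.path)/ready" ]; do sleep 1; done
        echo "sidecar is ready"
```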

#### Setting a default `TaskRun` `Workspace Binding`

An organization may want to specify default `Workspace` configuration for `TaskRuns`. This allows users to
use `Tasks` without having to know the specifics of `Workspaces` - they can simply rely on the platform
to use the default configuration when a `Workspace` is missing. To support this, Tekton allows a default
`Workspace Binding` to be specified for `TaskRuns`. When the `TaskRun` executes, any `Workspaces` that
a `Task` requires but which are not provided by the `TaskRun` will be bound with the default configuration.

The configuration for the default `Workspace Binding` is added to the `config-defaults` `ConfigMap`, under
the `default-task-run-workspace-binding` key. For an example, see the [Customizing basic execution
parameters](./install.md#customizing-basic-execution-parameters) section of the install doc.
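
As a sketch of what that looks like (the `emptyDir` choice here is an illustrative assumption - the install doc linked above is authoritative for the exact format):

```yaml
# Hypothetical config-defaults ConfigMap making emptyDir the default
# binding for any required Workspace a TaskRun does not provide.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-task-run-workspace-binding: |
    emptyDir: {}
```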

**Note:** the default configuration is used for any _required_ `Workspace` declared by a `Task`. Optional
`Workspaces` are not populated with the default binding. This is because a `Task's` behaviour will typically
differ slightly when an optional `Workspace` is bound.

#### Using `Workspace` variables in `Tasks`

To use a `Workspace` in your `Pipeline`, you must add the following information to your `Pipeline` definition:
- A list of the `Workspaces` that your `PipelineRun` will provide; each entry in the list must have a unique name.
- A mapping of `Workspace` names between the `Pipeline` and the `Task` definitions.

The example below defines a `Pipeline` with a `Workspace` named `pipeline-ws1`. This
`Workspace` is bound in two `Tasks` - first as the `output` workspace declared by the `gen-code`
`Task`, then as the `src` workspace declared by the `commit` `Task`. If the `Workspace`
provided by the `PipelineRun` is a `PersistentVolumeClaim` then these two `Tasks` can share data.
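
The mapping described above can be sketched as follows. The `Task` names `gen-code` and `commit` and the `PipelineTask` name `use-ws-from-pipeline` come from this doc; the remaining names are illustrative assumptions:

```yaml
# Hypothetical Pipeline binding one Workspace into two Tasks.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  workspaces:
    - name: pipeline-ws1
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code
      workspaces:
        - name: output            # the Workspace gen-code declares
          workspace: pipeline-ws1 # mapped to the Pipeline's Workspace
    - name: use-ws-again
      taskRef:
        name: commit
      workspaces:
        - name: src               # the Workspace commit declares
          workspace: pipeline-ws1
      runAfter:
        - use-ws-from-pipeline    # gen-code must write before commit reads
```

The `runAfter` ordering matters because both `Tasks` touch the same storage: `commit` must not start until `gen-code` has produced its output.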

Include a `subPath` in the `Workspace Binding` to mount different parts of the same volume for different `Tasks`. See [a full example of this kind of Pipeline](../examples/v1beta1/pipelineruns/pipelinerun-using-different-subpaths-of-workspace.yaml) which writes data to two adjacent directories on the same Volume.

The `subPath` specified in a `Pipeline` will be appended to any `subPath` specified as part of the `PipelineRun` workspace declaration. So a `PipelineRun` declaring a `Workspace` with `subPath` of `/foo` for a `Pipeline` that binds it to a `Task` with `subPath` of `/bar` will end up mounting the `Volume`'s `/foo/bar` directory.
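
In YAML, the `/foo/bar` composition above corresponds to two fragments like these (the claim name `my-pvc` and workspace names are illustrative assumptions):

```yaml
# Fragment of a hypothetical PipelineRun: everything lands under foo/.
workspaces:
  - name: pipeline-ws1
    subPath: foo
    persistentVolumeClaim:
      claimName: my-pvc
---
# Fragment of the Pipeline's task binding: this Task is further scoped
# to bar/, so it mounts the Volume's foo/bar directory.
workspaces:
  - name: src
    workspace: pipeline-ws1
    subPath: bar
```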

#### Specifying `Workspace` order in a `Pipeline` and Affinity Assistants

When using a workspace backed by a `PersistentVolumeClaim` (typically only available within a Data Center) and the `TaskRun`
pods can be scheduled to any Availability Zone in a regional cluster, some techniques must be used to avoid deadlock in the `Pipeline`.

Tekton provides an Affinity Assistant that schedules all `TaskRun` Pods sharing a `PersistentVolumeClaim` to the same
Node. This avoids deadlocks that can happen when two Pods requiring the same Volume are scheduled to different Availability Zones.
A volume typically only lives within a single Availability Zone.
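
If this co-scheduling is undesirable - for example, when a cluster-wide storage class already handles cross-zone access - the Affinity Assistant can be switched off via the `disable-affinity-assistant` feature flag. A sketch, assuming the standard `feature-flags` `ConfigMap` in the `tekton-pipelines` namespace:

```yaml
# Hypothetical fragment: opting out of the Affinity Assistant.
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"
```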
