
Commit

Add "volume" PipelineResource 🔊
This will allow copying content into or out of a `TaskRun`, using either
an existing volume or a newly created one. The immediate use case is
copying one pipeline's workspace so it can be used as the input for
another pipeline's workspace, without needing to upload everything to a
bucket. The volume, whether pre-existing or newly created, will not be
deleted at the end of the `PipelineRun`, unlike the artifact storage PVC.

The Volume resource is a sub-type of the general Storage resource.

Since this type will require the creation of a PVC to function (to be
configurable later), this commit adds a Setup interface that
PipelineResources can implement if they need to do setup that involves
instantiating objects in Kube. This could be a place to later add
features like caching, and also is the sort of design we'd expect once
PipelineResources are extensible (PipelineResources will be free to do
whatever setup they need).

The behavior of this volume resource is:
1. For inputs, copy data _from_ the PVC to the workspace path
2. For outputs, copy data _to_ the PVC from the workspace path

If a user wants to control where the data is copied from or to, they can:
1. Add a step that copies the data from the desired location on disk to
   /workspace/whatever
2. Use the "targetPath" argument on the PipelineResource to control the
   location the data is copied to, still relative to the workspace directory
   (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md#controlling-where-resources-are-mounted);
   a sketch follows this list
3. Use `path`
   (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md#overriding-where-resources-are-copied-from);
   TBD whether we want to keep this feature post-PVC
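
As a rough sketch of option 2, the Task below uses `targetPath` to change
where a volume-backed storage input lands (the Task, resource, and path
names here are hypothetical, not part of this commit):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: use-volume-data
spec:
  inputs:
    resources:
      - name: source
        type: storage
        # data is copied to /workspace/my-subdir instead of /workspace/source
        targetPath: my-subdir
  steps:
    - name: list-contents
      image: ubuntu
      command: ['bash']
      args: ['-c', 'ls /workspace/my-subdir']
```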

The underlying PVC will need to be created by the TaskRun reconciler, if
only a TaskRun is being used, or by the PipelineRun reconciler if a
Pipeline is being used. The PipelineRun reconciler cannot delegate this
to the TaskRun reconciler because when two different reconcilers create
PVCs and Tekton is running on a regional GKE cluster, the PVCs can be
created in different zones, making a pod that tries to use both
unschedulable.

In order to actually schedule a pod using two volume resources, we had
to:
- Use a storage class that can be scheduled in a GKE regional cluster
  https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd
- Either use the same storage class for the PVC attached automatically
  for input/output linking or don't use the PVC (chose the latter!)

This commit removes automatic PVC copying for input/output linking of
the VolumeResource: since it is itself backed by a PVC, there is no need
to copy through an intermediate PVC. This makes it simpler to make a Task
using the VolumeResource schedulable, removes redundant copying, and
removes a side effect where, if a VolumeResource's output was linked to
an input, the Task with the input would see _only_ the changes made by
the output and none of the other contents of the PVC.

This commit also removes the docs on the `paths` param (i.e. "overriding
where resources are copied from") because it was implemented such that it
only works in the output -> input PVC linking case, so it cannot actually
be used by users, and it will be removed in tektoncd#1284.

fixes tektoncd#1062

Co-authored-by: Dan Lorenc <lorenc.d@gmail.com>
Co-authored-by: Christie Wilson <bobcatfish@gmail.com>
3 people committed Oct 10, 2019
1 parent ee0b72e commit 18bd7ff
Showing 33 changed files with 1,842 additions and 192 deletions.
10 changes: 5 additions & 5 deletions docs/install.md
@@ -124,16 +124,16 @@ or a [GCS storage bucket](https://cloud.google.com/storage/)
The PVC option can be configured using a ConfigMap with the name
`config-artifact-pvc` and the following attributes:
- size: the size of the volume (5Gi by default)
- storageClassName: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
- `size`: the size of the volume (5Gi by default)
- `storageClassName`: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
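
For example, a minimal sketch of this ConfigMap (the `tekton-pipelines`
namespace and the values shown are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines
data:
  size: "10Gi"                 # overrides the 5Gi default
  storageClassName: "standard" # use the cluster's "standard" storage class
```
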
The GCS storage bucket can be configured using a ConfigMap with the name
`config-artifact-bucket` with the following attributes:
- location: the address of the bucket (for example gs://mybucket)
- bucket.service.account.secret.name: the name of the secret that will contain
- `location`: the address of the bucket (for example gs://mybucket)
- `bucket.service.account.secret.name`: the name of the secret that will contain
the credentials for the service account with access to the bucket
- bucket.service.account.secret.key: the key in the secret with the required
- `bucket.service.account.secret.key`: the key in the secret with the required
service account json.
- The bucket is recommended to be configured with a retention policy after which
files will be deleted.
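
Similarly, a sketch of the bucket ConfigMap (the namespace, Secret name,
and key are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  location: "gs://mybucket"
  # Secret holding the service account key with access to the bucket
  bucket.service.account.secret.name: "gcs-access"
  # key inside that Secret containing the service account JSON
  bucket.service.account.secret.key: "service_account.json"
```
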
139 changes: 53 additions & 86 deletions docs/resources.md
@@ -17,6 +17,15 @@ For example:

- [Syntax](#syntax)
- [Resource types](#resource-types)
- [Git Resource](#git-resource)
- [Pull Request Resource](#pull-request-resource)
- [Image Resource](#image-resource)
- [Cluster Resource](#cluster-resource)
- [Storage Resource](#storage-resource)
- [GCS Storage Resource](#gcs-storage-resource)
- [BuildGCS Storage Resource](#buildgcs-storage-resource)
- [Volume Resource](#volume-resource)
- [Cloud Event Resource](#cloud-event-resource)
- [Using Resources](#using-resources)

## Syntax
@@ -119,94 +128,8 @@ spec:
value: /workspace/go
```
### Overriding where resources are copied from
When specifying input and output `PipelineResources`, you can optionally specify
`paths` for each resource. `paths` will be used by `TaskRun` as the resource's
new source paths, i.e. the resource is copied from the specified list of paths.
`TaskRun` expects the folder and contents to already be present at the specified
paths. The `paths` feature can be used to provide extra files or an altered
version of an existing resource before the steps execute.

An output resource includes a name, a reference to a pipeline resource, and
optionally `paths`. `paths` will be used by `TaskRun` as the resource's new
destination paths, i.e. the resource is copied entirely to the specified paths.
`TaskRun` is responsible for creating the required directories and copying the
contents over. The `paths` feature can be used to inspect the results of a
taskrun after its steps have executed.

The `paths` feature for input and output resources is heavily used to pass the
same version of a resource across tasks in the context of a pipelinerun.

In the following example, a task and a taskrun are defined with an input
resource, an output resource, and a step which builds a war artifact. After the
taskrun (`volume-taskrun`) executes, the `custom` volume will have the entire
`java-git-resource` resource (including the war artifact) copied to the
destination path `/custom/workspace/`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: volume-task
  namespace: default
spec:
  inputs:
    resources:
      - name: workspace
        type: git
  outputs:
    resources:
      - name: workspace
  steps:
    - name: build-war
      image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
      command: jar
      args: ["-cvf", "projectname.war", "*"]
      volumeMounts:
        - name: custom-volume
          mountPath: /custom
```

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: volume-taskrun
  namespace: default
spec:
  taskRef:
    name: volume-task
  inputs:
    resources:
      - name: workspace
        resourceRef:
          name: java-git-resource
  outputs:
    resources:
      - name: workspace
        paths:
          - /custom/workspace/
        resourceRef:
          name: java-git-resource
  volumes:
    - name: custom-volume
      emptyDir: {}
```

## Resource Types
The following `PipelineResources` are currently supported:

- [Git Resource](#git-resource)
- [Pull Request Resource](#pull-request-resource)
- [Image Resource](#image-resource)
- [Cluster Resource](#cluster-resource)
- [Storage Resource](#storage-resource)
- [GCS Storage Resource](#gcs-storage-resource)
- [BuildGCS Storage Resource](#buildgcs-storage-resource)
- [Cloud Event Resource](#cloud-event-resource)

### Git Resource
Git resource represents a [git](https://git-scm.com/) repository, that contains
@@ -770,6 +693,50 @@ the container image
[gcr.io/cloud-builders//gcs-fetcher](https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gcs-fetcher)
does not support configuring secrets.

#### Volume Resource

The Volume `PipelineResource` will create and manage an underlying
[PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVC).

To create a Volume resource:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: volume-resource-1
spec:
  type: storage
  params:
    - name: type
      value: volume
    - name: size
      value: 5Gi
    - name: subPath
      value: some/path/on/the/pvc
    - name: storageClassName
      value: regional-disk
```

Supported `params` are:

* `size` - **Required** The size of the underlying PVC, expressed as a
  [Quantity](https://godoc.org/k8s.io/apimachinery/pkg/api/resource#Quantity)
* `subPath` - By default, data will be placed at the root of the PVC. This allows data to
instead be placed in a subfolder on the PVC
* `storageClassName` - The [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/)
that the PVC should use. For example, this is how you can use multiple Volume PipelineResources
[with GKE regional clusters](#using-with-gke-regional-clusters).
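
A hypothetical `TaskRun` wiring `volume-resource-1` into a Task input named
`data` (the Task `read-from-volume` is assumed to declare a `storage` input
with that name):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: use-volume-run
spec:
  taskRef:
    name: read-from-volume
  inputs:
    resources:
      - name: data
        resourceRef:
          name: volume-resource-1
```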

##### Using with GKE Regional Clusters

When using GKE regional clusters, newly created PVCs are assigned to zones in a
round-robin fashion. This means that if one Task uses two Volume PipelineResources, you must specify a
[`regional-pd`](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd) storage class; otherwise the PVCs could be created in different zones, making it
impossible to schedule the Task's pod so that it can use both.

[See the volume PipelineResource example.](../examples/pipelineruns/volume-output-pipelinerun.yaml)
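
For reference, the regional storage class used in that example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
```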

### Cloud Event Resource

The Cloud Event Resource represents a [cloud event](https://github.com/cloudevents/spec)
183 changes: 183 additions & 0 deletions examples/pipelineruns/volume-output-pipelinerun.yaml
@@ -0,0 +1,183 @@
# This example uses multiple PVCs and will be run against a regional GKE cluster.
# This means we have to make sure that the PVCs aren't created in different zones,
# and the only way to do this is to create regional PVCs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-disk
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: volume-resource-1
spec:
  type: storage
  params:
    - name: type
      value: volume
    - name: storageClassName
      value: regional-disk
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: volume-resource-2
spec:
  type: storage
  params:
    - name: type
      value: volume
    - name: path
      value: special-folder
    - name: storageClassName
      value: regional-disk
---
# Task writes data to a predefined path
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: create-files
spec:
  outputs:
    # This Task uses two volume outputs to ensure that multiple volume
    # outputs can be used
    resources:
      - name: volume1
        type: storage
      - name: volume2
        type: storage
  steps:
    - name: write-new-stuff-1
      image: ubuntu
      command: ['bash']
      args: ['-c', 'echo stuff1 > $(outputs.resources.volume1.path)/stuff1']
    - name: write-new-stuff-2
      image: ubuntu
      command: ['bash']
      args: ['-c', 'echo stuff2 > $(outputs.resources.volume2.path)/stuff2']
---
# Reads files from a predefined path and writes a new file as well
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: files-exist-and-add-new
spec:
  inputs:
    resources:
      - name: volume1
        type: storage
        targetPath: newpath
      - name: volume2
        type: storage
  outputs:
    resources:
      - name: volume1
        type: storage
  steps:
    - name: read1
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
        - '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
    - name: read2
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
        - '[[ stuff2 == $(cat $(inputs.resources.volume2.path)/stuff2) ]]'
    - name: write-new-stuff-3
      image: ubuntu
      command: ['bash']
      args: ['-c', 'echo stuff3 > $(outputs.resources.volume1.path)/stuff3']
---
# Reads files from a predefined path
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: files-exist
spec:
  inputs:
    resources:
      - name: volume1
        type: storage
  steps:
    - name: read1
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
        - '[[ stuff1 == $(cat $(inputs.resources.volume1.path)/stuff1) ]]'
    - name: read3
      image: ubuntu
      command: ["/bin/bash"]
      args:
        - '-c'
        - '[[ stuff3 == $(cat $(inputs.resources.volume1.path)/stuff3) ]]'
---
# The first Task writes files to two volumes. The next Task ensures these files exist,
# then writes a third file to the first volume. The last Task ensures both expected
# files exist on this volume.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: volume-output-pipeline
spec:
  resources:
    - name: volume1
      type: storage
    - name: volume2
      type: storage
  tasks:
    - name: first-create-files
      taskRef:
        name: create-files
      resources:
        outputs:
          - name: volume1
            resource: volume1
          - name: volume2
            resource: volume2
    - name: then-check-and-write
      taskRef:
        name: files-exist-and-add-new
      resources:
        inputs:
          - name: volume1
            resource: volume1
            from: [first-create-files]
          - name: volume2
            resource: volume2
            from: [first-create-files]
        outputs:
          - name: volume1
            # This Task uses the same volume as an input and an output to ensure this works
            resource: volume1
    - name: then-check
      taskRef:
        name: files-exist
      resources:
        inputs:
          - name: volume1
            resource: volume1
            from: [then-check-and-write]
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: volume-output-pipeline-run
spec:
  pipelineRef:
    name: volume-output-pipeline
  serviceAccount: 'default'
  resources:
    - name: volume1
      resourceRef:
        name: volume-resource-1
    - name: volume2
      resourceRef:
        name: volume-resource-2
17 changes: 10 additions & 7 deletions pkg/apis/pipeline/v1alpha1/artifact_pvc.go
@@ -57,7 +57,7 @@ func (p *ArtifactPVC) GetCopyFromStorageToSteps(name, sourcePath, destinationPat
}}}
}

// GetCopyToStorageFromSteps returns a container used to upload artifacts for temporary storage
func (p *ArtifactPVC) GetCopyToStorageFromSteps(name, sourcePath, destinationPath string) []Step {
return []Step{{Container: corev1.Container{
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("source-mkdir-%s", name)),
@@ -86,13 +86,16 @@ func GetPvcMount(name string) corev1.VolumeMount {
}
}

// CreateDirStep returns a container step to create a dir
func CreateDirStep(bashNoopImage string, name, destinationPath string) Step {
// CreateDirStep returns a container step to create a dir at destinationPath. The name
// of the step will include name. Optionally will mount included volumeMounts if the
// dir is to be created on the volume.
func CreateDirStep(bashNoopImage string, name, destinationPath string, volumeMounts []corev1.VolumeMount) Step {
return Step{Container: corev1.Container{
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("create-dir-%s", strings.ToLower(name))),
Image: bashNoopImage,
Command: []string{"/ko-app/bash"},
Args: []string{"-args", strings.Join([]string{"mkdir", "-p", destinationPath}, " ")},
Name: names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("create-dir-%s", strings.ToLower(name))),
Image: bashNoopImage,
Command: []string{"/ko-app/bash"},
Args: []string{"-args", strings.Join([]string{"mkdir", "-p", destinationPath}, " ")},
VolumeMounts: volumeMounts,
}}
}

