Workspace Provisioning concepts
A User Workspace provides a service through which users can organize the data and processing services they are currently working with, as well as the results of executed processing, Research Objects, etc.
A workspace is represented in the following way:
- Status: provisioning, ready, deprovisioning
- Storage
  - URL (S3-compliant)
  - Access credentials
  - Quota (size in MB)
- Component endpoint URLs: Ingestor, Data Access, Catalogue
Note: The workspace storage (i.e. the individual user bucket) can only be accessed via the access credentials or by "cloud platform identities" that are explicitly whitelisted via bucket policies (e.g. the ADES component).
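The representation above can be sketched as a plain data structure. Note that the field names here are illustrative assumptions, not the actual Workspace API schema:

```python
from dataclasses import dataclass

# Illustrative model of a workspace record; field names are assumptions,
# not the actual Workspace API schema.
@dataclass
class Storage:
    url: str            # S3-compliant endpoint of the user bucket
    access_key: str     # access credentials for the bucket
    access_secret: str
    quota_mb: int       # quota (size in MB)

@dataclass
class Workspace:
    status: str         # one of: "provisioning", "ready", "deprovisioning"
    storage: Storage
    endpoints: dict     # component endpoint URLs: ingestor, data access, catalogue

ws = Workspace(
    status="provisioning",
    storage=Storage(
        url="https://s3.example.org/ws-user1",  # placeholder URL
        access_key="AKIA-PLACEHOLDER",
        access_secret="placeholder-secret",
        quota_mb=1024,
    ),
    endpoints={
        "ingestor": "https://example.org/ingestor",
        "data-access": "https://example.org/data-access",
        "catalogue": "https://example.org/catalogue",
    },
)
```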
The individual EOEPCA building blocks enable different deployment scenarios supporting different conceptual models:
To support such flexibility, neither the Resource Management building block nor the Processing & Chaining building block (with its ADES component) makes assumptions about user ↔ workspace assignments. They keep this concern out of their domain by offering an API to provision and manage workspaces.
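A provisioning call to such an API could look as follows. This is only a sketch: the endpoint path and payload fields are assumptions for illustration, not the documented API contract.

```python
import json
import urllib.request

# Hypothetical request to a Workspace API to provision a workspace for a user.
# Host, path, and payload fields are placeholders, not the real API contract.
payload = {"preferred_name": "user1"}
req = urllib.request.Request(
    "http://workspace-api.example.org/workspaces",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; requires a running Workspace API
```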
EOEPCA building blocks also strive to be applicable to different deployment scenarios and cloud provider capabilities by defining a common overall workflow and abstracting only the cloud platform specifics; in the case of workspace provisioning, this means the instantiation of the concretely available object storage solution:
- User-specific data access and cataloguing components are provisioned in a unified way, using dynamic (on-demand) Kubernetes Deployments for each individual user.
- The provisioning of user storage (i.e. an S3 object storage bucket) is modelled as a long-running (asynchronous) process:
  1. State the need for the creation of this cloud resource via the Workspace API. This is achieved in a generic way using a special Kubernetes Custom Resource Definition (CRD) representing such a claim (see Bucket claim).
  2. Implement the bucket creation by sending a POST request to the bucket operator wrapper endpoint, where bucket creation will either:
     a. be accepted and processed asynchronously; the Workspace API then waits for the fulfillment of this need, i.e. the availability of user storage and access credentials, by watching for the creation of a Kubernetes Secret providing these details; or
     b. be performed directly, with the bucket's access credentials returned in the response body.
- The concrete implementation of the fulfillment logic is fully pluggable. An example implementation demonstrating a fully automated solution for CreoDIAS/OpenStack is provided as well (see bucket-operator).
- Note: it is also possible not to automate the fulfillment and instead to manually:
  1. check the Kubernetes cluster for a Bucket Claim
  2. create the object storage via platform tooling (e.g. use a CLI, Terraform, etc. to create a bucket for the user)
  3. create a Kubernetes Secret with the access details, which is picked up by the Workspace API (see step 2 above)
The fulfillment process during workspace provisioning on top of a specific cloud platform has the following responsibilities:
- create an S3-compliant object storage bucket on the underlying cloud platform for a specific user, with restricted access
- create an access key and access secret to be used by the user and their components to access the bucket
- set a bucket policy (whitelisting) so that other EOEPCA components (such as the ADES for stage-out) can access the individual user bucket
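The whitelisting step can be illustrated with a standard S3 bucket policy document. The bucket name and the principal identifying the ADES are placeholders; the exact principal format depends on the platform's S3 implementation:

```python
import json

# Example S3 bucket policy granting a whitelisted platform identity (e.g. the
# identity used by the ADES for stage-out) access to the user's bucket.
# Bucket name and principal are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWhitelistedComponent",
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/ades"]},
            "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::ws-user1",    # bucket itself (for ListBucket)
                "arn:aws:s3:::ws-user1/*",  # objects in the bucket
            ],
        }
    ],
}
policy_json = json.dumps(policy)  # serialized form, as applied to the bucket
```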
Please refer to the README of the bucket-operator example implementation to see how these steps are achieved on the CreoDIAS/OpenStack cloud platform.