RFC to introduce Cloud Native Buildpacks lifecycle #796

Merged · 7 commits · Apr 30, 2024
177 changes: 177 additions & 0 deletions toc/rfc/rfc-draft-cnb-lifecycle.md
# Meta

[meta]: #meta
- Name: Cloud Native Buildpacks Lifecycle
- Start Date: 2024-03-19
- Author(s): @c0d1ngm0nk3y, @pbusko, @nicolasbender, @modulo11
- Status: Draft <!-- Acceptable values: Draft, Approved, On Hold, Superseded -->
- RFC Pull Request: https://github.com/cloudfoundry/community/pull/796
- Updates: [RFC 0017](https://github.com/cloudfoundry/community/blob/main/toc/rfc/rfc-0017-add-cnbs.md)

## Summary

[Cloud Native Buildpacks (CNBs)](https://buildpacks.io/), also known as v3 buildpacks, are the current generation of buildpacks and offer some improvements over the v2 buildpacks that CF Deployment currently uses. The Cloud Foundry Foundation already has an implementation of Cloud Native Buildpacks via the [Paketo](https://paketo.io/) project; however, these CNBs cannot currently be used in CF.

This RFC introduces a new optional lifecycle to Cloud Foundry which enables users to build their applications using Cloud Native Buildpacks.

## Problem

The v2 buildpacks are effectively in maintenance mode and do not receive substantial new features. By not integrating with v3 buildpacks, Cloud Foundry is missing out on new buildpacks (e.g. Java Native and Web Servers CNBs) as well as any new features that are added to the still-actively-developed v3 buildpacks.
> **Reviewer comment (Member):** The folks on our side are doing development on both V2 and CNBs in parallel; I'm seeing new capabilities come online for V2 frequently as well. What's giving you folks this impression? This will be significant work in quite a few areas to support CNBs in CF, and I want to make sure it's worth it.
>
> **Author reply:** We've taken this pretty literally from RFC 0017. Regarding the amount of work: our plan was not only to propose this RFC, but also to work on contributing the implementation.
>
> **Author reply:** @dsboulder, would you want to propose a different wording, or are we good to resolve this question?

## Proposal

- Introduce a new [lifecycle type](https://v3-apidocs.cloudfoundry.org/index.html#lifecycles) `cnb` and its lifecycle data
- Introduce a new [app lifecycle](https://github.com/cloudfoundry/diego-design-notes#app-lifecycles) called `cnbapplifecycle` which interacts with the [CNB Lifecycle](https://github.com/buildpacks/lifecycle)
- Reuse [cflinuxfs4](https://github.com/cloudfoundry/cflinuxfs4) as the base for staging and running apps
- Introduce a new flag to CLI and the [app manifest](https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html) to be able to use Cloud Native Buildpacks instead of v2 buildpacks

### Architecture

This will require changes in the following releases:

- CF CLI
- Cloud Controller
- Diego

No changes to how Diego runs workloads are necessary to implement this RFC.

- The CLI will forward a new app lifecycle type to the cloud controller.
- The cloud controller will use a new `cnbapplifecycle` for the application.

Affected cloud controller APIs (all that interact with [lifecycles](https://v3-apidocs.cloudfoundry.org/index.html#lifecycles)):

- [apps](https://v3-apidocs.cloudfoundry.org/index.html#apps)
- [builds](https://v3-apidocs.cloudfoundry.org/index.html#builds)
- [droplets](https://v3-apidocs.cloudfoundry.org/index.html#droplets)
- [manifests](https://v3-apidocs.cloudfoundry.org/index.html#manifests)
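
For illustration, a sketch of what an app-create request carrying the new lifecycle might look like. The field layout follows the existing v3 `lifecycle` object; the space GUID and buildpack reference are placeholders, not part of this RFC:

```json
{
  "name": "my-app",
  "relationships": { "space": { "data": { "guid": "space-guid" } } },
  "lifecycle": {
    "type": "cnb",
    "data": {
      "buildpacks": ["docker://gcr.io/paketo-buildpacks/java"],
      "stack": "cflinuxfs4"
    }
  }
}
```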

### Goals

- Establish the latest generation of buildpacks in CF as a first-class citizen
- Increase cohesion and app portability between CF Deployment and [Korifi](https://www.cloudfoundry.org/technology/korifi/), via mutual Paketo integration
- Increase adoption of Cloud Native Buildpacks
- Open the door for eventual deprecation of the v2 buildpacks, reducing maintenance costs (v2 buildpack deprecation is NOT included in this RFC)
- No fundamental changes to the architecture of CF
- Result of the staging process will be a droplet
- No OCI registry necessary
- Reuse cflinuxfs4 as rootfs during build and run
- No change to how the Cloud Foundry platform provides service binding information

> **Reviewer comment:** The CNB lifecycle supports exporting to OCI layout on disk (the resulting image can even be "sparse", in that run image layers will be missing). This should facilitate exporting to a CF droplet without having to re-implement large parts of the exporter.

### High Level Implementation
> **Contributor:** Ideally a heading would be added discussing what to do with service bindings. I know some prior work was done in buildpacks/libcnb#228, but is this library used by all Paketo buildpacks? And what about non-Paketo CNBs?
>
> **Member:** I never saw a great solution to the VCAP_SERVICES => CNB bindings problem. Is there a way to make this work that isn't a ton of brittle translation tables?
>
> **Member:** #804 tries to address this point. I would be interested in your feedback. FYI @dmikusa, as we discussed this during the TOC meeting and feedback is welcome.
>
> **Member:** @rkoster @dsboulder @beyhan Should we add a little subsection referring to buildpacks/libcnb#228, the possibility to do the same for packit-based Paketo buildpacks, and referring to #804 for solving it once and for all? In short, I would probably write this as: "Buildpacks that need credentials from services should be adapted to read from VCAP_SERVICES until #804 removes this requirement. Note: buildpacks based on libpak or libcnb already understand credentials from VCAP_SERVICES since libcnb#228 has been merged and released."
>
> **Member:** +1
>
> **Contributor:** Should it really? It feels a bit out of place because (a) this is not necessarily CF-related, but rather buildpack-related, and (b) this feels like an implementation detail, as already established in the predecessor RFC (here). What should we mention here? That CF does not have to change?
>
> **Member:** OK, then can we mention that it is out of scope for this RFC to change anything in how binding information is handled by the CF runtime? That means CNB-based apps will get VCAP_SERVICES as an env var, and apps or buildpacks have to deal with this. Future RFCs like #804 will deal with this.
>
> **Member:** Yes, thanks!
>
> **Contributor (@rkoster, Apr 23, 2024):** Now that we have an RFC addressing the problem of service binding spec compatibility, there is no need to discuss it in this RFC. A link to the service-binding RFC would be nice so that future readers understand how they relate.

#### CNB App Lifecycle

Introduce a new `lifecycle type` that enables the cloud controller to differentiate between the classic buildpacks and Cloud Native Buildpacks. At a high level, it will be very similar to the existing buildpackapplifecycle (v2 buildpacks). The new app lifecycle acts as a CNB [platform](https://buildpacks.io/docs/for-app-developers/concepts/platform/) and will:

1. Download the app source code from the blobstore
1. Download the CNB app lifecycle from the blobstore
1. Download the configured buildpacks
1. Write an [order.toml](https://github.com/buildpacks/spec/blob/main/platform.md#ordertoml-toml) with configured buildpacks
1. Execute [detect](https://github.com/buildpacks/spec/blob/main/platform.md#detector) and [build](https://github.com/buildpacks/spec/blob/main/platform.md#builder) phases using the [CNB lifecycle](https://github.com/buildpacks/lifecycle)
1. Package the result into a droplet and upload it to the blobstore
1. Write a result.json file with the [Staging Result](https://github.com/cloudfoundry/buildpackapplifecycle/blob/f4b2bc9ff6cc6229402d7c27e887763154cf0378/models.go#L73-L80)
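
Steps 4 and 6 above can be sketched as follows. This is an illustration only: the helper names, the single-group `order.toml` layout, and the `result.json` fields are assumptions (loosely following buildpackapplifecycle's staging result), not the actual cnbapplifecycle implementation.

```python
# Sketch of two platform steps: rendering an order.toml for the configured
# buildpacks and a result.json describing the staged droplet. Field names
# and layout are illustrative assumptions, not the real schema.
import json


def write_order_toml(buildpack_ids):
    """Render an order.toml with a single group listing the configured
    buildpacks in the order the user supplied them (no auto-detection)."""
    lines = ["[[order]]"]
    for bp_id in buildpack_ids:
        lines.append("")
        lines.append("  [[order.group]]")
        lines.append(f'    id = "{bp_id}"')
    return "\n".join(lines) + "\n"


def write_result_json(process_types):
    """Render a result.json describing the staged droplet (illustrative)."""
    return json.dumps(
        {"process_types": process_types, "lifecycle_type": "cnb"},
        indent=2,
    )
```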

#### CNB Lifecycle Type

Introduce a new lifecycle type which indicates that Cloud Native Buildpacks should be used. In the future, this can be enhanced with additional CNB inputs. For this RFC, we'd start with:

```json
{
  "type": "cnb",
  "data": {
    "buildpacks": ["docker://gcr.io/paketo-buildpacks/java"],
    "stack": "cflinuxfs4"
  }
}
```

Both building and running an app will be based on the configured stack. If no stack is provided, the platform default is used. An empty (or omitted) list of buildpacks will lead to an error; this essentially means that no auto-detection is supported at the moment. Once [system CNBs](#system-buildpacks) are supported, this behavior will change.
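
The validation rules above can be sketched as a hypothetical cloud-controller-side check (names are illustrative, not actual Cloud Controller code):

```python
# Illustrative sketch of the stated rules: at least one buildpack is
# required (no auto-detection yet), and a missing stack falls back to
# the platform default. Not actual Cloud Controller code.
PLATFORM_DEFAULT_STACK = "cflinuxfs4"  # assumed platform default


def validate_cnb_lifecycle(data):
    """Return normalized lifecycle data, raising on an empty buildpack list."""
    buildpacks = data.get("buildpacks") or []
    if not buildpacks:
        raise ValueError("cnb lifecycle requires at least one buildpack")
    stack = data.get("stack") or PLATFORM_DEFAULT_STACK
    return {"buildpacks": buildpacks, "stack": stack}
```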

#### CF CLI

A new flag `--lifecycle [buildpack|docker|cnb]` will be introduced for the `cf push` command.

#### App Manifest

New property `lifecycle: buildpack|docker|cnb` will be added to the App manifest. It will default to `lifecycle: buildpack`. Using `docker-*` properties implies `lifecycle: docker`.
The buildpack URL must start with one of the following schemes: `docker://`, `http://`, or `https://`.

```yaml
---
applications:
- name: test-app
  instances: 1
  lifecycle: cnb
  buildpacks:
  - docker://gcr.io/paketo-buildpacks/java
```
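
The scheme rule above can be expressed as a simple prefix check; the helper below is a hypothetical sketch, not actual CF CLI code:

```python
# Hypothetical check for the buildpack URL scheme rule described above.
ALLOWED_SCHEMES = ("docker://", "http://", "https://")


def valid_buildpack_url(url):
    """Return True if the buildpack URL uses one of the allowed schemes."""
    return url.startswith(ALLOWED_SCHEMES)
```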

Both changes (CLI and manifest) were chosen because they are simple from a user perspective, easy to implement, and easy to remove if CNBs become the standard lifecycle in the future.

### Alternative APIs

- Instead of a lifecycle type switch, introduce a `buildpack-type` (`v2`/`v3`, `cf`/`cnb`, or `classic`/`cnb`) to distinguish between different lifecycles.

#### Diego Release

The new lifecycle package `cnb_app_lifecycle` will be added to the Diego BOSH release, next to the existing `buildpack_app_lifecycle` and `docker_app_lifecycle`. This lifecycle package should be served by the File Server in the same way as existing lifecycle packages.

### Possible Future Work

#### Add Paketo Stack(s)

This RFC does not include the addition of a new stack to Cloud Foundry; rather, the resulting droplet would run on top of the existing `cflinuxfs4`. This should work for most, if not all, apps, as the stack is much bigger (in terms of native packages installed) than the stacks used by Paketo.
> **Member:** The mechanism that currently exists rebases the base layer (stack) onto the droplet on every application restart. CNBs instead add the base image (the stack equivalent) to the resulting OCI image at build time, so the mechanism works differently when building internally vs. externally with the same CNB. Adding the base layer at build time can be favorable for the stack deprecation problem, where it is currently not possible to remove a stack while applications still depend on it: with build-time stacks, a missing stack merely prevents restaging, while already-built droplets include the stack and keep running. However, this also comes with some downsides:
>
> - A stack update means restaging all apps (resource costly, complex)
> - The stack is saved multiple times in the blobstore (storage and traffic)
>
> Although using the build workflow that is common for CNBs would help and make the systems behave more similarly, it comes with costs I'm not sure are worth it yet, but maybe something to consider.
>
> **Reply:** I'd say that the thought of using CNBs to produce full OCI images is independent from this "potential future work". Also, for a droplet-based approach, the smaller Paketo base images (Static, Tiny, and Base) are an improvement for application security, imho.
>
> **Reply:** @FloThinksPi I would say you are calling for a separate RFC that's largely independent from the Cloud Native Buildpacks topic. Are we good to resolve this question?
>
> **Reply:** Not to derail, but is this a point worth considering further? On "the stack is saved multiple times in the blobstore": layer deduplication is a core benefit of an OCI registry and should save a tremendous amount of space. Reusing the blobstore is certainly simplest, but there are cons on the other end to my eye. Maybe that concern belongs in the follow-on RFC?
>
> **Contributor:** I am not sure that moving CF-integrated CNBs from creating droplets to creating container images is something we want. If we don't, then there is no problem with a stack being saved multiple times, because the rootfs is saved separately from the droplets.

Paketo provides multiple stacks that are compatible with the Paketo buildpacks. There is an opportunity to adopt some or all of the Paketo stacks into CF Deployment. The Paketo buildpacks have greater cohesion with the Paketo stacks than with `cflinuxfs4`, and the Paketo "base" and "tiny" stacks could offer security-conscious CF Deployment users stacks with far fewer native packages than are currently included with `cflinuxfs4`.

This RFC does not cover the adoption of additional stacks into CF Deployment, but it does open the door to add these stacks in the future.

#### System Buildpacks

This RFC enables only the use of custom buildpacks. CNBs could be added as system buildpacks later to support some auto-detection, as with the existing v2 buildpacks.

#### Better SBoM Support

This RFC already introduces some of the SBoM capabilities offered by CNBs. Yet it is not complete (runtime OS information is missing), and the SBoM is buried in the layers of the droplet. This could be further improved in the future.

### Open Questions

#### Pulling Buildpacks from private registries

Cloud Foundry currently supports only a single set of credentials together with a single docker image passed as part of `cf push --docker`. With Cloud Native Buildpacks, however, multiple images from multiple registries, using different credentials, are possible. Deducing the credentials from the passed Cloud Native Buildpacks is not possible, e.g. when they are consumed from both unauthenticated (e.g. Docker Hub) and authenticated registries.

Options:

- Require environment variable with docker config content.

```json
{
  "auths": {
    "registry.io": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
```

- Require an environment variable pointing to a docker config file. The CF CLI must parse the file and invoke credential helpers if needed for the required registries.
- Require a custom credentials configuration, e.g.

```bash
CNB_REGISTRY_CREDS='{"registry.io":{"user":"password"}}' cf push ...
```

```json
{
  "type": "cnb",
  "data": {
    "buildpacks": ["docker://gcr.io/paketo-buildpacks/java"],
    "stack": "cflinuxfs4",
    "credentials": {
      "registry.io": {
        "user": "password"
      }
    }
  }
}
```
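
For the docker-config option above, the `auth` field holds the base64-encoded `user:password` pair, so a platform component would need to decode it per registry. A minimal sketch (the helper name is illustrative):

```python
# Sketch of extracting per-registry credentials from docker-config-style
# JSON, decoding the base64 `auth` field (which holds "user:password").
import base64
import json


def registry_credentials(docker_config_json):
    """Return {registry: (user, password)} from a docker config document."""
    config = json.loads(docker_config_json)
    creds = {}
    for registry, entry in config.get("auths", {}).items():
        decoded = base64.b64decode(entry["auth"]).decode()
        user, _, password = decoded.partition(":")
        creds[registry] = (user, password)
    return creds
```

For example, the `dXNlcjpwYXNzd29yZA==` value shown earlier decodes to `user:password`.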