This repository has been archived by the owner on Aug 21, 2024. It is now read-only.

Update documentation (#40)
* Update docs
adamwalach authored Jul 30, 2019
1 parent 367e7b1 commit ac17ddc
Showing 8 changed files with 57 additions and 202 deletions.
8 changes: 8 additions & 0 deletions docs/README.md
@@ -14,6 +14,14 @@ This page is an index for the articles in here. We recommend you start by readin

Afterward, see the topics below.

**Note:** Between versions 0.2.0 and 0.3.0, Service Catalog changed its internal storage mechanism.
In version 0.2.0 and older, Service Catalog used its own API server and etcd.

Starting with version 0.3.0, Service Catalog moved to a solution based on Custom Resource Definitions (CRDs), a native Kubernetes feature.

The API server implementation will be supported with bug fixes for the next 9 months.
If you still use Service Catalog version 0.2.0, read the [migration guide](./migration-apiserver-to-crds.md).

## Topics for users:

- [Installation instructions](install.md)
165 changes: 21 additions & 144 deletions docs/auth.md
@@ -3,12 +3,11 @@ title: Serving Certificates, Authentication, and Authorization
layout: docwithnav
---

This document outlines how the service catalog API server handles
authentication and authorization.
This document outlines how the service catalog handles authentication.

The service catalog Helm chart's defaults paired with most Kubernetes
distributions will automatically set up all authentication and authorization
details correctly. This documentation, therefore, exists for the benefit of
distributions will automatically set up all authentication details correctly.
This documentation, therefore, exists for the benefit of
those who wish to develop an advanced understanding of this topic and those
who have a need to address various outlying scenarios.

@@ -27,58 +26,35 @@ all. Generally, our client CA certificates will be self-signed, since
they represent the "root" of our trust relationship: clients must
inherently trust the CA.

In a full setup of the service catalog API server, there are three
*different* CAs (and these really should be different):

1. a serving CA: this CA signs "serving" certificates, which are used to
encrypt communication over HTTPS. The same CA used to sign the main
Kubernetes API server serving certificate pair may also be used to sign
the service catalog serving certificates, but a different CA
may also be used.

The service catalog Helm chart automatically generates this CA. There is
generally no need to override this.

2. a client CA: this CA signs client certificates, and is used by the main
Kubernetes API server to authenticate users based on the client certificates
they submit. The same client CA may also be used for the service catalog
API server, but a different CA may also be used. Using the same CA
ensures that identity trust works consistently for both the main Kubernetes
API server and the service catalog API server.

As an example, the default cluster admin user generated in many
Kubernetes distributions uses client certificate authentication.
Additionally, controllers or non-human clients running outside the cluster
often use certificate-based authentication.

3. a RequestHeader client CA: this special CA signs proxy client
certificates. Clients presenting these certificates are effectively
trusted to masquerade as any other identity. When running behind the
API aggregator, this *must* be the same CA used to sign the
aggregator's proxy client certificate.

On many Kubernetes distributions, this CA is provided through a flag on the
main Kubernetes API server. The main Kubernetes API server inserts the CA
certificate into a config map and the API server for all aggregated APIs
will, by default, inherit the CA from that config map. For Kubernetes
distributions that do not handle this automatically, the RequestHeader client
CA can be set manually on the service catalog API server.
The service catalog Helm chart automatically generates a new CA.
This CA signs "serving" certificates, which are used to encrypt communication
over HTTPS. There is generally no need to override this.

### Generating certificates

In the common case, all three CA certificates referenced above already
exist as part of the main Kubernetes cluster setup.
In the common case, the CA certificate referenced above already
exists as part of the installation.

In case you need to generate any of the CA certificate pairs mentioned
above yourself, the Kubernetes documentation has [detailed
instructions](https://kubernetes.io/docs/admin/authentication/#creating-certificates)
on how to create certificates several different ways.

The Service Catalog Helm chart uses built-in [Sprig](https://github.com/Masterminds/sprig) functions to generate
all the certificates needed by the Webhook Server:
```
{{- $ca := genCA "service-catalog-webhook-ca" 3650 }}
{{- $cn := printf "%s-webhook" (include "fullname" .) }}
{{- $altName1 := printf "%s.%s" $cn .Release.Namespace }}
{{- $altName2 := printf "%s.%s.svc" $cn .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $altName1 $altName2) 3650 $ca }}
```
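
As an illustration of how this generated material can be consumed, here is a minimal sketch, assuming it is rendered in the same template scope as the variables above; the Secret name is an example, not the chart's actual manifest. The signed pair typically lands in a TLS Secret mounted by the Webhook Server, while the CA certificate becomes the `caBundle` of the webhook configuration:

```yaml
# Illustrative sketch only; assumes the same template scope as $cert above.
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "fullname" . }}-webhook-cert   # example name
  namespace: {{ .Release.Namespace }}
type: kubernetes.io/tls
data:
  tls.crt: {{ b64enc $cert.Cert }}
  tls.key: {{ b64enc $cert.Key }}
```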

## Authentication

The service catalog API server makes use of the standard Kubernetes add-on
API server delegated authentication setup. There are several components
of the delegated authentication setup, detailed below.
CRDs always use the same authentication and authorization as the built-in resources of your API Server.
If you use RBAC for authorization, most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules).
You’ll need to explicitly grant access to the new resources.
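
For example, a minimal sketch of a `ClusterRole` granting read-only access to the Service Catalog resources could look like this; the role name is illustrative, and you would still need to bind it to your users or groups with a `ClusterRoleBinding`:

```yaml
# Illustrative only: read-only access to the Service Catalog CRD-based resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-catalog-reader   # example name
rules:
  - apiGroups: ["servicecatalog.k8s.io"]
    resources:
      - clusterservicebrokers
      - clusterserviceclasses
      - clusterserviceplans
      - serviceinstances
      - servicebindings
    verbs: ["get", "list", "watch"]
```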

### Client Certificate Authentication

@@ -91,106 +67,7 @@ Generally, the default admin user in a cluster connects with client
certificate authentication. Additionally, off-cluster non-human clients
often use client certificate authentication.

By default, a main Kubernetes API server configured with the
`--client-ca-file` option automatically creates a ConfigMap called
`extension-apiserver-authentication` in the `kube-system` namespace,
populated with the client CA file. The service catalog API server uses
this CA certificate as the CA used to verify client authentication. This
way, client certificate users who can authenticate with the main
Kubernetes system can also authenticate with the service catalog API
server.

See the [delegated token authentication](#delegated-token-authentication)
section for more information about how the service catalog API server
contacts the main Kubernetes API server to access this ConfigMap.

If you wish to use a different client CA certificate to verify client
certificate authentication, you can manually pass the `--client-ca-file`
option to the service catalog API server.

See the [x509 client
certificates](https://kubernetes.io/docs/admin/authentication/#x509-client-certs)
section of the Kubernetes documentation for more information.

### Delegated Token Authentication

Delegated token authentication authenticates clients who pass in a token
using the `Authorization: Bearer $TOKEN` HTTP header. This is the common
authentication method used by most human Kubernetes clients, as well as
in-cluster non-human clients.

In this case, the service catalog API server extracts the token from the
HTTP request, and verifies it against another API server. In common cases,
this is the main Kubernetes API server. This allows users who can
authenticate with the main Kubernetes system to also authenticate with
the service catalog API server.

By default, the service catalog API server searches for the connection
information and credentials that are automatically injected into every pod
running on a Kubernetes cluster.

If you do not wish to have the service catalog API server authenticate
against the same cluster that it is running on, or if it is running
outside of a cluster, you can pass the `--authentication-kubeconfig`
option to the service catalog API server to specify a different Kubeconfig
file to use to connect.

The [Webhook token
authentication](https://kubernetes.io/docs/admin/authentication/#webhook-token-authentication)
method described in the Kubernetes authentication documentation works
similarly in principle to delegated token authentication, except that we
use an existing Kubernetes cluster instead of an external webhook.

### RequestHeader Authentication

RequestHeader authentication authenticates connections from API server
proxies, which themselves have already authenticated the client. It works
similarly to [client certificate
authentication](#client-certificate-authentication): it validates the
certificate of the proxy using a CA certificate. However, it then allows
the proxy to masquerade as any other user, by reading a series of headers
set by the proxy. This allows the service catalog API server to run behind
the API server aggregator.

By default, the service catalog API server attempts to pull the
requestheader client CA certificate and appropriate header names from the
`extension-apiserver-authentication` ConfigMap mentioned above in
[client-certificate-authentication](#client-certificate-authentication).
The main Kubernetes API server populates this if it was configured with
the `--requestheader-client-ca-file` option (and optionally associated
`--requestheader-` options).

However, some API servers are not configured with the
`--requestheader-client-ca-file` option. In these cases, you must pass
the `--requestheader-client-ca-file` option to the service catalog API
server. Any API server proxies need to have client certificates signed by
this CA certificate in order to properly pass their authentication
information through to the service catalog API server.

Alternatively, you can pass the `--authentication-skip-lookup` flag to the
service catalog API server. However, this will *also* disable client
certificate authentication unless you manually pass the corresponding
`--client-ca-file` flag.

In addition to the CA certificate, you can also configure a number of
additional options. See the [authenticating
proxy](https://kubernetes.io/docs/admin/authentication/#authenticating-proxy)
section of the Kubernetes documentation for more information.

### Authorization

The service catalog API server uses delegated authorization. This means
that it queries for authorization against the main Kubernetes API server.
This means that you can store your policy in the same place as the policy
used for the main Kubernetes API server, and in the same format (e.g.
Kubernetes RBAC).

By default, the service catalog API server searches for the connection
information and credentials that are automatically injected into every pod
running on a Kubernetes cluster.

If you do not wish to have the service catalog API server authenticate
against the same cluster that it is running on, or if it is running
outside of a cluster, you can pass the `--authorization-kubeconfig` option
to the service catalog API server to specify a different Kubeconfig file to
use to connect.
5 changes: 1 addition & 4 deletions docs/cli.md
@@ -8,10 +8,7 @@ resources. svcat is a domain-specific tool to make interacting with the Service
While many of its commands have analogs to `kubectl`, our goal is to streamline and optimize
the operator experience.

svcat communicates with the Service Catalog API through the [aggregated API][agg-api] endpoint on a
Kubernetes cluster.

[agg-api]: https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/
svcat communicates with the Kubernetes cluster directly through its REST API, just like kubectl.

This document assumes that you've installed Service Catalog and the Service Catalog CLI
onto your cluster. If you haven't, please see the [installation instructions](install.md#installing-the-service-catalog-cli).
32 changes: 11 additions & 21 deletions docs/design.md
Expand Up @@ -97,40 +97,30 @@ the basic concepts of the OSB API.
![Service Catalog Design](images/sc-design.svg)

The above is the high level architecture of Service Catalog.
Service Catalog has two basic building blocks: a Webhook Server and a controller.

Service Catalog has two basic building blocks: an API server and a controller.
This design is similar to the design of Kubernetes itself (in fact,
Service Catalog borrows a lot of code from Kubernetes to implement this
design).

### API Server
### Webhook Server

The API Server is an HTTPS
[REST](https://en.wikipedia.org/wiki/Representational_state_transfer)ful
front-end for [etcd](https://coreos.com/etcd/)
(we can implement more storage backends, but we haven't done so)
The Webhook Server uses [Admission Webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
to manage custom resources. An admission webhook is a feature of the Kubernetes API server
that lets you implement arbitrary control decisions, such as validation or mutation.

Users and the Service Catalog controller interact with the API server
(via the
[Kubernetes API aggregator](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/)
to perform [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations on
Service Catalog resources (the ones listed in the previous section). For example, when a user runs
`kubectl get clusterservicebroker`, they will be talking to the Service
Catalog API server to get the list of `ClusterServiceBroker` resources.

The current version of all Service Catalog API resources is `v1beta1`.
You can see the structure of each resource in detail at
For every resource managed by Service Catalog (the ones listed in the previous section), there is a separate handler defined. You can see the structure at
[`pkg/webhook/servicecatalog`](https://github.com/kyma-incubator/service-catalog/tree/crds/pkg/webhook/servicecatalog).
The current version of all Service Catalog API resources is `v1beta1`. The resources are defined here:
[`pkg/apis/servicecatalog/v1beta1/types.go`](https://github.com/kubernetes-sigs/service-catalog/blob/master/pkg/apis/servicecatalog/v1beta1/types.go).

If you want to learn more about Admission Webhooks, read [this](https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/) document.
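
To illustrate how these handlers are wired to the Kubernetes API server, a hypothetical webhook configuration for a single resource could look like the sketch below; the service name, namespace, and handler path are assumptions, not the values installed by the chart:

```yaml
# Illustrative ValidatingWebhookConfiguration; all names and paths are examples.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: service-catalog-validating-webhook    # example name
webhooks:
  - name: validating.serviceinstances.servicecatalog.k8s.io
    failurePolicy: Fail
    clientConfig:
      service:
        name: service-catalog-webhook         # example service name
        namespace: service-catalog            # example namespace
        path: /validating-serviceinstances    # example handler path
      caBundle: "<base64-encoded CA certificate>"   # placeholder
    rules:
      - apiGroups: ["servicecatalog.k8s.io"]
        apiVersions: ["v1beta1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["serviceinstances"]
```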

### Controller

The Service Catalog controller implements the behaviors of the service-catalog
API. It monitors the API resources (by watching the stream of events from the
API server) and takes the appropriate actions to reconcile the current
state with the user's desired end state.

For example, if a user creates a `ClusterServiceBroker`, the Service Catalog
API server will store the resource and emit an event. The Service Catalog
For example, if a user creates a `ClusterServiceBroker`, the Service Catalog
controller will pick up the event and request the catalog from the
broker listed in the resource.
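
As an illustration, a minimal `ClusterServiceBroker` manifest might look like this (the name and URL are placeholders); creating it is what triggers the controller to fetch the broker's catalog:

```yaml
# Illustrative ClusterServiceBroker; the controller fetches the catalog from spec.url.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: example-broker        # placeholder name
spec:
  url: http://example-broker.brokers.svc.cluster.local   # placeholder broker endpoint
```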

22 changes: 6 additions & 16 deletions docs/devguide.md
@@ -124,10 +124,10 @@ layout:
│   └── catalog # Helm chart for deploying the service catalog
│   └── ups-broker # Helm chart for deploying the user-provided service broker
├── cmd # Contains "main" Go packages for each service catalog component binary
│   └── apiserver # The service catalog API server service-catalog command
│   └── controller-manager # The service catalog controller manager service-catalog command
│   └── controller-manager # The service catalog controller manager command
│   └── service-catalog # The service catalog binary, which is used to run commands
│ └── svcat # The command-line interface for interacting with kubernetes service-catalog resources
│ └── webhook # The service catalog webhook server command
├── contrib # Contains examples, non-essential golang source, CI configurations, etc
│   └── build # Dockerfiles for contrib images (example: ups-broker)
│   └── cmd # Entrypoints for contrib binaries
@@ -334,7 +334,8 @@ functionality or introduce instability. See [FeatureGates](feature-gates.md)
for more details.

When adding a FeatureGate to Helm charts, define the variable
`fooEnabled` with a value of `false` in [values.yaml](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/values.yaml). In the [API Server](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/apiserver-deployment.yaml) and [Controller](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/controller-manager-deployment.yaml)
`fooEnabled` with a value of `false` in [values.yaml](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/values.yaml).
In the [Webhook Server](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/webhook-deployment.yaml) and [Controller](https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/controller-manager-deployment.yaml)
templates, add the new FeatureGate:
{% raw %}
```yaml
@@ -449,9 +450,7 @@ If someone wants to install an unreleased version, they must build it locally.

Use the [`catalog` chart](../charts/catalog) to deploy the service
catalog into your cluster. The easiest way to get started is to deploy into a
cluster you regularly use and are familiar with. One of the choices you can
make when deploying the catalog is whether to make the API server store its
resources in an external etcd server, or in third party resources.
cluster you regularly use and are familiar with.

If you have recently merged changes that haven't yet made it into a
release, you probably want to deploy the canary images. Always use the
@@ -470,19 +469,10 @@ helm install charts/catalog \
--set image=quay.io/kubernetes-service-catalog/service-catalog:canary
```

If you choose etcd storage, the helm chart will launch an etcd server for you
in the same pod as the service-catalog API server. You will be responsible for
the data in the etcd server container.

If you choose third party resources storage, the helm chart will not launch an
etcd server, but will instead instruct the API server to store all resources in
the Kubernetes cluster as third party resources.

### Deploy local canary
For your convenience, you can use the following script to quickly rebuild, push, and
deploy the canary image. There are a few assumptions about your environment and
configuration in the script (for example, that you have persistent storage setup
for etcd so that you don't lose data in between pushes). If the assumptions do
configuration in the script. If the assumptions do
not match your needs, we suggest copying the contents of that script and using
it as a starting point for your own custom deployment script.


