[need help] updating changelog for v0.8 #1114

Merged 21 commits on Feb 7, 2019
551 changes: 548 additions & 3 deletions CHANGELOG.md

Large diffs are not rendered by default.

82 changes: 27 additions & 55 deletions doc/source/advanced.md
@@ -129,27 +129,30 @@ hub:
# some other code
```

### `hub.extraConfigMap`
### `custom` configuration

This property takes a dictionary of values that are then made available for code
in `hub.extraConfig` to read using a `z2jh.get_config` function. You can use this to
easily separate your code (which goes in `hub.extraConfig`) from your config
(which should go here).
The contents of `values.yaml` are passed through to the Hub image,
and you can access these values via the `z2jh.get_config` function
to further customize the hub pod.
Version 0.8 of the chart adds a top-level `custom`
field for passing through any additional configuration you may need;
it can contain arbitrary YAML.
You can use this to separate your code (which goes in `hub.extraConfig`)
from your config (which should go in `custom`).

For example, if you use the following snippet in your config.yaml file:

```yaml
hub:
extraConfigMap:
myString: Hello!
myList:
- Item1
- Item2
myDict:
key: value
myLongString: |
Line1
Line2
custom:
myString: Hello!
myList:
- Item1
- Item2
myDict:
key: value
myLongString: |
Line1
Line2
```

In your `hub.extraConfig`,
@@ -166,8 +169,14 @@ In your `hub.extraConfig`,
You need an `import z2jh` statement at the top of your `extraConfig` for
`z2jh.get_config()` to work.
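
For illustration, here is a minimal sketch of reading the `custom` values above
from `hub.extraConfig`. The snippet name `10-custom-config` is arbitrary, and
the optional-default form of `get_config` is an assumption about the helper module:

```yaml
hub:
  extraConfig:
    10-custom-config: |
      import z2jh

      # read values defined under the top-level `custom` field of config.yaml
      my_string = z2jh.get_config('custom.myString')   # "Hello!"
      my_list = z2jh.get_config('custom.myList')       # ["Item1", "Item2"]
      # assumption: get_config accepts a fallback default value
      missing = z2jh.get_config('custom.doesNotExist', None)
```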

Note that the keys in `hub.extraConfigMap` must be alpha numeric strings
starting with a character. Dashes and Underscores are not allowed.
```eval_rst
.. versionchanged:: 0.8

Previously, additional values had to be passed via `hub.extraConfigMap`,
which was more restrictive.
`hub.extraConfigMap` is deprecated in favor of the new
top-level `custom` field, which accepts fully arbitrary YAML.
```

### `hub.extraEnv`

@@ -190,43 +199,6 @@ in kubernetes that has a long list of cool use cases. Some example use cases are:
The items in this list must be valid kubernetes
[container specifications](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#container-v1-core).
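
The heading for this list is collapsed in the diff above; in the chart this
list is typically `hub.extraContainers`. A minimal hedged sketch of one valid
container specification entry (the sidecar below is purely illustrative):

```yaml
hub:
  extraContainers:
    # hypothetical sidecar, for illustration only
    - name: example-sidecar
      image: busybox:1.31
      command: ["sh", "-c", "while true; do date; sleep 60; done"]
```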

## Picking a Scheduler Strategy

Kubernetes offers very flexible ways to determine how it distributes pods on
your nodes. The JupyterHub helm chart supports two common configurations, see
below for a brief description of each.

### Spread

* **Behavior**: This spreads user pods across **as many nodes as possible**.
* **Benefits**: A single node going down will not affect too many users. If you do not have explicit memory & cpu
limits, this strategy also allows your users the most efficient use of RAM & CPU.
* **Drawbacks**: This strategy is less efficient when used with autoscaling.

This is the default strategy. To explicitly specify it, use the following in your
`config.yaml`:

```yaml
singleuser:
schedulerStrategy: spread
```

### Pack

* **Behavior**: This packs user pods into **as few nodes as possible**.
* **Benefits**: This reduces your resource utilization, which is useful in conjunction with autoscalers.
* **Drawbacks**: A single node going down might affect more user pods than using
a "spread" strategy (depending on the node).

When you use this strategy, you should specify limits and guarantees for memory
and cpu. This will make your users' experience more predictable.

To explicitly specify this strategy, use the following in your `config.yaml`:

```yaml
singleuser:
schedulerStrategy: pack
```

## Pre-pulling Images for Faster Startup

2 changes: 1 addition & 1 deletion doc/source/amazon/efs_storage.rst
@@ -1,4 +1,4 @@
.. _amazon-aws:
.. _amazon-efs:

Setting up EFS storage on AWS
-----------------------------
21 changes: 12 additions & 9 deletions doc/source/amazon/step-zero-aws-eks.rst
@@ -3,10 +3,10 @@
Step Zero: Kubernetes on Amazon Web Services (AWS) with Elastic Container Service for Kubernetes (EKS)
-----------------------------------------------------------------------------------------------

AWS recently released native support for Kubernetes. Note: This is only available in US West (Oregon) (us-west-2) and
AWS recently released native support for Kubernetes. Note: This is only available in US West (Oregon) (us-west-2) and
US East (N. Virginia) (us-east-1)

This guide uses AWS to set up a cluster. This mirrors the steps found at `"Getting Started with Amazon EKS" <https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html>`_ with some details filled in that are absent
This guide uses AWS to set up a cluster. This mirrors the steps found at `Getting Started with Amazon EKS`_, with some details filled in that are absent there.

Procedure:

@@ -19,25 +19,25 @@ Procedure:

(From the user interface, select EKS as the service, then follow the default steps)

2. Create a VPC if you don't already have one.
2. Create a VPC if you don't already have one.

This step has a lot of variability so it is left to the user. However, one deployment can be found at `"Getting Started with Amazon EKS" <https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html>`_, under *Create your Amazon EKS Cluster VPC*
This step has a lot of variability so it is left to the user. However, one deployment can be found at `Getting Started with Amazon EKS`_, under *Create your Amazon EKS Cluster VPC*

3. Create a Security Group for the EKS Control Plane to use

You do not need to set any permissions on this. The steps below will automatically define access control between the EKS Control Planne and the individual nodes
You do not need to set any permissions on this. The steps below will automatically define access control between the EKS Control Plane and the individual nodes

4. Create your EKS cluster (using the user interface)

Use the IAM Role in step 1 and Security Group defined in step 3. The cluster name is going to be used throughout. We'll use ``Z2JHKubernetesCluster`` as an example.

5. Install **kubectl** and **heptio-authenticator-aws**

Refer to `"Getting Started with Amazon EKS" <https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html>`_ on *Configure kubectl for Amazon EKS*
Refer to `Getting Started with Amazon EKS`_ on *Configure kubectl for Amazon EKS*

6. Configure *kubeconfig*
6. Configure *kubeconfig*

Also see `"Getting Started with Amazon EKS" <https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html>`_ *Step 2: Configure kubectl for Amazon EKS*
Also see `Getting Started with Amazon EKS`_ *Step 2: Configure kubectl for Amazon EKS*

From the user interface on AWS you can retrieve the ``endpoint-url`` and ``base64-encoded-ca-cert``; ``cluster-name`` is the name given in step 4. If you are using profiles in your AWS configuration, you can uncomment the ``env`` block and specify your profile as ``aws-profile``.::
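
The authoritative template is in the collapsed block above; as a hedged sketch, the kubeconfig typically looks like this, with the placeholders matching the parameters named in this step::

   apiVersion: v1
   kind: Config
   clusters:
   - name: kubernetes
     cluster:
       server: <endpoint-url>
       certificate-authority-data: <base64-encoded-ca-cert>
   contexts:
   - name: aws
     context:
       cluster: kubernetes
       user: aws
   current-context: aws
   users:
   - name: aws
     user:
       exec:
         apiVersion: client.authentication.k8s.io/v1alpha1
         command: heptio-authenticator-aws
         args:
           - "token"
           - "-i"
           - "<cluster-name>"
         # env:
         #   - name: AWS_PROFILE
         #     value: "<aws-profile>"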

@@ -82,7 +82,7 @@ Procedure:

8. Create the nodes using CloudFormation

See `"Getting Started with Amazon EKS" <https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html>`_ *Step 3: Launch and Configure Amazon EKS Worker Nodes*
See `Getting Started with Amazon EKS`_ *Step 3: Launch and Configure Amazon EKS Worker Nodes*

**Warning**: if you are deploying on a private network, the CloudFormation template creates a public IP for each worker node, even though there is no route to reach them when only private subnets are specified. To correct this, edit the CloudFormation template and change ``Resources.NodeLaunchConfig.Properties.AssociatePublicIpAddress`` from ``'true'`` to ``'false'``
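
A hedged sketch of the relevant fragment of that CloudFormation template after the edit (all other properties omitted)::

   Resources:
     NodeLaunchConfig:
       Type: AWS::AutoScaling::LaunchConfiguration
       Properties:
         # changed from 'true' so worker nodes on private subnets
         # do not receive an unreachable public IP
         AssociatePublicIpAddress: 'false'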

@@ -133,3 +133,6 @@ Then run

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

.. References

.. _Getting Started with Amazon EKS: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
6 changes: 3 additions & 3 deletions doc/source/create-k8s-cluster.rst
@@ -3,9 +3,8 @@
Setup a Kubernetes Cluster
==========================

Kubernetes' documentation describes the many `ways to set up a cluster
<https://kubernetes.io/docs/setup/pick-right-solution/>`__. We attempt to
provide quick instructions for the most painless and popular ways of setting up
Kubernetes' documentation describes the many `ways to set up a cluster`_.
We attempt to provide quick instructions for the most painless and popular ways of setting up
a Kubernetes cluster on various cloud providers and on other infrastructure.

Choose one option and proceed.
@@ -16,6 +15,7 @@ Choose one option and proceed.
google/step-zero-gcp
microsoft/step-zero-azure
amazon/step-zero-aws
amazon/step-zero-aws-eks
redhat/step-zero-openshift
ibm/step-zero-ibm

2 changes: 1 addition & 1 deletion doc/source/extending-jupyterhub.rst
@@ -23,7 +23,7 @@ The general method to modify your Kubernetes deployment is to:
RELEASE=jhub

helm upgrade $RELEASE jupyterhub/jupyterhub \
--version=0.7.0 \
--version=0.8.0b1 \
--values config.yaml

Note that ``helm list`` should display ``<YOUR_RELEASE_NAME>`` if you forgot it.
24 changes: 7 additions & 17 deletions doc/source/getting-started.rst
@@ -1,22 +1,12 @@
.. _getting-started:

Overview
========
Moved
=====

At this point, you should have completed *Step Zero* and have an operational
Kubernetes cluster available. If not, see :ref:`create-k8s-cluster`.
This documentation has been reorganized. Start at :doc:`index`.

From now on, we will almost exclusively control the cloud through Kubernetes
rather then something that is specific to the cloud provider. What you learn
from now on is therefore also useful with other cloud providers.
.. raw:: html

The next step is to setup Helm. Helm will allow us to install a package of
things on the cloud. This is relevant to us as there are several parts alongside
the JupyterHub itself to allow it to run on the cloud relating to storage,
network and security.

After setting up Helm, we will use it to install JupyterHub and associated
infrastructure. After this has been done, you can spend time configuring your
deployment of JupyterHub to suit your needs.

Let's get started by moving on to :ref:`setup-helm`.
<script type="text/javascript">
window.location.href = "./index.html"
</script>
9 changes: 8 additions & 1 deletion doc/source/glossary.rst
@@ -24,7 +24,7 @@ details.
`config.yaml`
The :term:`Helm charts <helm chart>` templates are rendered with these
:term:`Helm values` as input. The file is written in the `YAML
<https://en.wikipedia.org/wiki/YAML>`_ format. The YAML format is esential
<https://en.wikipedia.org/wiki/YAML>`_ format. The YAML format is essential
to grasp if working with Kubernetes and Helm.

container
@@ -42,6 +42,13 @@ details.
A Docker image, built from a :term:`Dockerfile`, allows tools like
``docker`` to create any number of :term:`containers <container>`.

image registry
A service for storing Docker images so that they can be shared
and used later.
The default public registry is at https://hub.docker.com,
but you can also run your own private image registry.
Many cloud providers offer private image registry services.

`environment variables <https://en.wikipedia.org/wiki/Environment_variable>`_
A set of named values that can affect the way running processes will
behave on a computer. Some common examples are ``PATH``, ``HOME``, and
37 changes: 0 additions & 37 deletions doc/source/google/future-user-node-pool.rst

This file was deleted.

47 changes: 45 additions & 2 deletions doc/source/google/step-zero-gcp.rst
@@ -102,14 +102,57 @@ your google cloud account.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=<GOOGLE-EMAIL-ACCOUNT>

Replace `<GOOGLE-EMAIL-ACCOUNT>` with the exact email of the Google account
you used to sign up for Google Cloud.

.. note::

Did you enter your email correctly? If not, you can run `kubectl delete
clusterrolebinding cluster-admin-binding` and do it again.

7. [optional] Create a node pool for users

This is an optional step, for those who want to separate
user pods from "core" pods such as the Hub itself and others.
See :doc:`../optimization` for details on using a dedicated user node pool.

The nodes in this node pool are for user pods only. The node pool has
autoscaling enabled along with a lower and an upper scaling limit, so
the number of nodes is automatically adjusted to match the number of
users scheduled.

The `n1-standard-2` machine type has 2 CPUs and 7.5 GB of RAM, of which
about 0.2 CPU per node will be requested by system pods. It is a suitable
choice for a free account that is limited to a total of 8 CPU cores.

Note that the node pool is *tainted*. Only user pods that are configured
with a *toleration* for this taint can schedule on the node pool's nodes.
This is done in order to ensure the autoscaler will be able to scale down
when the user pods have stopped.

.. code-block:: bash

gcloud beta container node-pools create user-pool \
--machine-type n1-standard-2 \
--num-nodes 0 \
--enable-autoscaling \
--min-nodes 0 \
--max-nodes 3 \
--node-labels hub.jupyter.org/node-purpose=user \
--node-taints hub.jupyter.org_dedicated=user:NoSchedule
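
If you create this tainted and labeled user pool, you may also want user pods to target it explicitly. A hedged sketch of the matching chart configuration, assuming the chart's ``scheduling.userPods.nodeAffinity.matchNodePurpose`` option in your ``config.yaml``:

.. code-block:: yaml

   scheduling:
     userPods:
       nodeAffinity:
         # matches the hub.jupyter.org/node-purpose=user label set above;
         # use "prefer" to allow falling back to other nodes
         matchNodePurpose: require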


.. preemptible node recommendation not included
.. pending handling of evictions in jupyterhub/kubespawner#223
.. .. note::

.. Consider adding the ``--preemptible`` flag to reduce the cost
.. significantly. You can `compare the prices here
.. <https://cloud.google.com/compute/docs/machine-types>`_. See
.. the `preemptible node documentation
.. <https://cloud.google.com/compute/docs/instances/preemptible>`_ for more
.. information.

Congrats. Now that you have your Kubernetes cluster running, it's time to
begin :ref:`creating-your-jupyterhub`.
4 changes: 3 additions & 1 deletion doc/source/index.rst
@@ -19,6 +19,8 @@ page`_. If you have tips or deployments that you would like to share, see

This documentation is for jupyterhub chart version |release|, which deploys JupyterHub |hub_version|.

This version of the chart requires Kubernetes ≥ 1.11 and Helm ≥ 2.11.


.. _about-guide:

@@ -144,7 +146,7 @@ up, managing, and maintaining JupyterHub.

We hope that you will use this section to share deployments on a variety
of infrastructure and for different use cases.
There is also a `community maintained list <users-list.html>`_ of users of this
There is also a :doc:`community maintained list <users-list>` of users of this
Guide and the JupyterHub Helm Chart.

Please submit a pull request to add to this section. Thanks.
4 changes: 2 additions & 2 deletions doc/source/optimization.md
@@ -245,9 +245,9 @@ This section about scaling down efficiently will also explain how the *user
scheduler* can help you reduce failures to scale down due to blocking user
pods.

#### Using a user dedicated node pool
#### Using a dedicated node pool for users

To set up a user dedicated node pool, we can use [*taints and
To set up a dedicated node pool for user pods, we can use [*taints and
tolerations*](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).
If we add a taint to all the nodes in the node pool, and a toleration on the
user pods to tolerate being scheduled on a tainted node, we have practically