Merge pull request kubernetes#24572 from reylejano-rxm/merged-master-dev-1.20

Merge master into dev-1.20 to keep in sync
k8s-ci-robot authored Oct 15, 2020
2 parents f1ac8ef + 34695f9 commit 091d314
Showing 206 changed files with 3,873 additions and 4,254 deletions.
6 changes: 4 additions & 2 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -14,9 +14,11 @@
Use the default base branch, “master”, if you're documenting existing
features in the English localization.
If you're working on a different localization (not English), or you
are documenting a feature that will be part of a future release, see
If you're working on a different localization (not English), see
https://kubernetes.io/docs/contribute/new-content/overview/#choose-which-git-branch-to-use
for advice.
If you're documenting a feature that will be part of a future release, see
https://kubernetes.io/docs/contribute/new-content/new-features/ for advice.
-->
4 changes: 2 additions & 2 deletions Makefile
@@ -65,10 +65,10 @@ container-image:
--build-arg HUGO_VERSION=$(HUGO_VERSION)

container-build: module-check
$(CONTAINER_RUN) $(CONTAINER_IMAGE) hugo --minify
$(CONTAINER_RUN) --read-only $(CONTAINER_IMAGE) hugo --minify

container-serve: module-check
$(CONTAINER_RUN) --mount type=tmpfs,destination=/src/resources,tmpfs-mode=0777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir

test-examples:
scripts/test_examples.sh install
21 changes: 21 additions & 0 deletions config.toml
@@ -33,6 +33,23 @@ enableGitInfo = true
# Hindi is disabled because it's currently in development.
disableLanguages = ["hi", "no"]

[caches]
[caches.assets]
dir = ":cacheDir/_gen"
maxAge = -1
[caches.getcsv]
dir = ":cacheDir/:project"
maxAge = "60s"
[caches.getjson]
dir = ":cacheDir/:project"
maxAge = "60s"
[caches.images]
dir = ":cacheDir/_images"
maxAge = -1
[caches.modules]
dir = ":cacheDir/modules"
maxAge = -1

[markup]
[markup.goldmark]
[markup.goldmark.extensions]
@@ -66,6 +83,10 @@ date = ["date", ":filename", "publishDate", "lastmod"]
[permalinks]
blog = "/:section/:year/:month/:day/:slug/"

[sitemap]
filename = "sitemap.xml"
priority = 0.75

# Be explicit about the output formats. We (currently) only want an RSS feed for the home page.
[outputs]
home = [ "HTML", "RSS", "HEADERS" ]
2 changes: 1 addition & 1 deletion content/de/docs/tasks/tools/install-kubectl.md
@@ -334,7 +334,7 @@ Sie müssen nun sicherstellen, dass das kubectl-Abschlussskript in allen Ihren S
```

{{< note >}}
bash-completion bezieht alle Verfollständigungsskripte aus `/etc/bash_completion.d`.
bash-completion bezieht alle Vervollständigungsskripte aus `/etc/bash_completion.d`.
{{< /note >}}

Beide Ansätze sind gleichwertig. Nach dem erneuten Laden der Shell sollte kubectl autocompletion funktionieren.
2 changes: 2 additions & 0 deletions content/en/_index.html
@@ -2,6 +2,8 @@
title: "Production-Grade Container Orchestration"
abstract: "Automated container deployment, scaling, and management"
cid: home
sitemap:
  priority: 1.0
---

{{< blocks/section id="oceanNodes" >}}
43 changes: 43 additions & 0 deletions content/en/blog/_posts/2020-10-12-steering-committee-results.md
@@ -0,0 +1,43 @@
---
layout: blog
title: "Announcing the 2020 Steering Committee Election Results"
date: 2020-10-12
slug: steering-committee-results-2020
---

**Author**: Kaslin Fields

The [2020 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2020) is now complete. In 2019, the committee arrived at its final allocation of 7 seats, 3 of which were up for election in 2020. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.

This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).

## Results

Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):

* **Davanum Srinivas ([@dims](https://github.com/dims)), VMware**
* **Jordan Liggitt ([@liggitt](https://github.com/liggitt)), Google**
* **Bob Killen ([@mrbobbytables](https://github.com/mrbobbytables)), Google**

They join continuing members Christoph Blecker ([@cblecker](https://github.com/cblecker)), Red Hat; Derek Carr ([@derekwaynecarr](https://github.com/derekwaynecarr)), Red Hat; Nikhita Raghunath ([@nikhita](https://github.com/nikhita)), VMware; and Paris Pittman ([@parispittman](https://github.com/parispittman)), Apple. Davanum Srinivas is returning for his second term on the committee.

## Big Thanks!

* Thank you and congratulations on a successful election to this round’s election officers:
* Jaice Singer DuMars ([@jdumars](https://github.com/jdumars)), Apple
* Ihor Dvoretskyi ([@idvoretskyi](https://github.com/idvoretskyi)), CNCF
* Josh Berkus ([@jberkus](https://github.com/jberkus)), Red Hat
* Thanks to the Emeritus Steering Committee Members. Your prior service is appreciated by the community:
* Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp)), Google
* and Lachlan Evenson ([@lachie8e](https://github.com/lachie8e)), Microsoft
* And thank you to all the candidates who came forward to run for election. As [Jorge Castro put it](https://twitter.com/castrojo/status/1315718627639820288?s=20): we are spoiled with capable, kind, and selfless volunteers who put the needs of the project first.

## Get Involved with the Steering Committee

This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee [backlog items](https://github.com/kubernetes/steering/projects/1) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They have an open meeting on [the first Monday of the month at 6pm UTC](https://github.com/kubernetes/steering) and regularly attend Meet Our Contributors. They can also be contacted at their public mailing list steering@kubernetes.io.

You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).

----

_This post was written by the [Upstream Marketing Working Group](https://github.com/kubernetes/community/tree/master/communication/marketing-team#contributor-marketing). If you want to write stories about the Kubernetes community, learn more about us._
2 changes: 2 additions & 0 deletions content/en/docs/_index.md
@@ -1,4 +1,6 @@
---
linktitle: Kubernetes Documentation
title: Documentation
sitemap:
  priority: 1.0
---
2 changes: 1 addition & 1 deletion content/en/docs/concepts/architecture/nodes.md
@@ -261,7 +261,7 @@ a Lease object.

#### Reliability

In most cases, node controller limits the eviction rate to
In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
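As an aside (a minimal sketch, not part of this change), the rate is a kube-controller-manager flag; on a kubeadm-style control plane it would typically be set in the static Pod manifest, with names and versions here being illustrative only:

```yaml
# Sketch: setting the node eviction rate on kube-controller-manager.
# Only the relevant fields are shown; paths, image, and version are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.20.0
    command:
    - kube-controller-manager
    - --node-eviction-rate=0.1   # the default: at most 1 node drained per 10 seconds
```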

3 changes: 2 additions & 1 deletion content/en/docs/concepts/configuration/configmap.md
@@ -115,7 +115,8 @@ metadata:
spec:
containers:
- name: demo
image: game.example/demo-game
image: alpine
command: ["sleep", "3600"]
env:
# Define the environment variable
- name: PLAYER_INITIAL_LIVES # Notice that the case is different here
@@ -47,6 +47,13 @@ Limits can be implemented either reactively (the system intervenes once it sees
or by enforcement (the system prevents the container from ever exceeding the limit). Different
runtimes can have different ways to implement the same restrictions.

{{< note >}}
If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes
automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own
CPU limit, but does not specify a CPU request, Kubernetes automatically assigns a CPU request that matches
the limit.
{{< /note >}}
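To make the note above concrete, here is a minimal sketch (illustrative only, not part of this diff) of a container that declares only limits; Kubernetes fills in matching requests:

```yaml
# Sketch: a Pod whose container sets limits but no requests.
# The scheduler will treat requests.cpu as 500m and requests.memory as 128Mi.
apiVersion: v1
kind: Pod
metadata:
  name: limits-only-demo   # illustrative name
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    resources:
      limits:
        cpu: "500m"
        memory: "128Mi"
      # no requests block: requests are assigned to match the limits above
```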

## Resource types

*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
25 changes: 18 additions & 7 deletions content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -38,7 +38,7 @@ No parameters are passed to the handler.

This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the call to delete the container can be sent.
so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.

A more detailed description of the termination behavior can be found in
@@ -56,18 +56,30 @@ Resources consumed by the command are counted against the Container.
### Hook handler execution

When a Container lifecycle management hook is called,
the Kubernetes management system executes the handler in the Container registered for that hook.
the Kubernetes management system executes the handler according to the hook action:
`exec` and `tcpSocket` handlers are executed in the container, and `httpGet` is executed by the kubelet process.

Hook handler calls are synchronous within the context of the Pod containing the Container.
This means that for a `PostStart` hook,
the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.

The behavior is similar for a `PreStop` hook.
If the hook hangs during execution,
the Pod phase stays in a `Terminating` state and is killed after `terminationGracePeriodSeconds` of pod ends.
If a `PostStart` or `PreStop` hook fails,
`PreStop` hooks are not executed asynchronously from the signal
to stop the Container; the hook must complete its execution before
the signal can be sent.
If a `PreStop` hook hangs during execution,
the Pod's phase will be `Terminating` and remain there until the Pod is
killed after its `terminationGracePeriodSeconds` expires.
This grace period applies to the total time it takes for both
the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop
normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is
less than the total time (55+10) it takes for these two things to happen.

If either a `PostStart` or `PreStop` hook fails,
it kills the Container.

Users should make their hook handlers as lightweight as possible.
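As a minimal sketch of the hooks discussed above (illustrative only, not taken from this diff), a Pod might register both handlers like this:

```yaml
# Sketch: postStart and preStop handlers on a container.
# The image and commands are placeholders; keep handlers lightweight so the
# preStop hook plus normal shutdown fit inside the grace period.
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo   # illustrative name
spec:
  terminationGracePeriodSeconds: 60   # budget shared by preStop and normal shutdown
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # must complete before the stop signal can be sent
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```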
@@ -121,4 +133,3 @@ Events:
* Get hands-on experience
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).


@@ -11,21 +11,17 @@ weight: 10

<!-- overview -->

{{< feature-state state="alpha" >}}
{{< caution >}}Alpha features can change rapidly. {{< /caution >}}

Network plugins in Kubernetes come in a few flavors:

* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
* CNI plugins: adhere to the [Container Network Interface](https://github.com/containernetworking/cni) (CNI) specification, designed for interoperability.
* Kubernetes follows the [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) release of the CNI specification.
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins



<!-- body -->

## Installation

The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:

* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
@@ -166,9 +162,4 @@ This option is provided to the network-plugin; currently **only kubenet supports
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.



## {{% heading "whatsnext" %}}



2 changes: 2 additions & 0 deletions content/en/docs/concepts/overview/_index.md
@@ -2,4 +2,6 @@
title: "Overview"
weight: 20
description: Get a high-level outline of Kubernetes and the components it is built from.
sitemap:
  priority: 0.9
---
17 changes: 13 additions & 4 deletions content/en/docs/concepts/overview/kubernetes-api.md
@@ -41,6 +41,7 @@ The Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
You can request the response format using request headers as follows:

<table>
<caption style="display:none">Valid request header values for OpenAPI v2 queries</caption>
<thead>
<tr>
<th>Header</th>
@@ -68,7 +69,6 @@ You can request the response format using request headers as follows:
<td><em>serves </em><code>application/json</code></td>
</tr>
</tbody>
<caption>Valid request header values for OpenAPI v2 queries</caption>
</table>

Kubernetes implements an alternative Protobuf based serialization format that
@@ -102,13 +102,22 @@ to ensure that the API presents a clear, consistent view of system resources
and behavior, and to enable controlling access to end-of-life and/or
experimental APIs.

Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
for more details on the API version level definitions.

To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/api-overview/#api-groups) that can be
[enabled or disabled](/docs/reference/using-api/api-overview/#enabling-or-disabling).

API resources are distinguished by their API group, resource type, namespace
(for namespaced resources), and name. The API server may serve the same
underlying data through multiple API versions and handle the conversion between
API versions transparently. All these different versions are actually
representations of the same resource. For example, suppose there are two
versions `v1` and `v1beta1` for the same resource. An object created by the
`v1beta1` version can then be read, updated, and deleted by either the
`v1beta1` or the `v1` versions.
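To illustrate that paragraph (a sketch, assuming a cluster that serves both versions of the resource), the same object can be written in one version and read back in the other:

```yaml
# Sketch: the same Ingress expressed through two API versions.
# The API server converts between them, so both manifests address the
# same underlying object; names are illustrative.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  backend:
    serviceName: demo-svc
    servicePort: 80
---
# The object created above, read back through the newer version:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  defaultBackend:
    service:
      name: demo-svc
      port:
        number: 80
```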

Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
for more details on the API version level definitions.

## API Extension

The Kubernetes API can be extended in one of two ways:
2 changes: 2 additions & 0 deletions content/en/docs/concepts/overview/what-is-kubernetes.md
@@ -10,6 +10,8 @@ weight: 10
card:
name: concepts
weight: 10
sitemap:
  priority: 0.9
---

<!-- overview -->
@@ -28,9 +28,6 @@ resource can only be in one namespace.

Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).
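For instance (a sketch, not part of this change, with illustrative names and values), a quota that caps what one team's namespace can consume might look like:

```yaml
# Sketch: limiting aggregate compute use in a single namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```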

In future versions of Kubernetes, objects in the same namespace will have the same
access control policies by default.

It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
@@ -20,7 +20,7 @@ The kube-scheduler can be configured to enable bin packing of resources along wi

## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation

Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters and improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
consists of `name`, which specifies the resource to be considered during scoring, and `weight`, which specifies the weight of each resource.

Below is an example configuration that sets `requestedToCapacityRatioArguments` to bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
@@ -181,7 +181,7 @@ When you set `setHostnameAsFQDN: true` in the Pod spec, the kubelet writes the P
{{< note >}}
In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters.

If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from pod hostname and cluster domain, FQDN `long-FDQN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment.
If a Pod enables this feature and its FQDN is longer than 64 characters, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment.
{{< /note >}}
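For reference (a minimal sketch, not part of this diff; names and image are illustrative), the field discussed above sits directly in the Pod spec:

```yaml
# Sketch: asking the kubelet to use the Pod's FQDN as its hostname.
# Keep hostname + subdomain + cluster domain under the 64-character limit.
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
  namespace: default
spec:
  hostname: fqdn-demo
  subdomain: demo-subdomain   # assumes a matching headless Service for DNS
  setHostnameAsFQDN: true
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "hostname && sleep 3600"]
```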

### Pod's DNS Policy
4 changes: 4 additions & 0 deletions content/en/docs/concepts/services-networking/service.md
@@ -881,6 +881,10 @@ There are other annotations to manage Classic Elastic Load Balancers that are de
# health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
# value. Defaults to 5, must be between 2 and 60

service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
# A list of existing security groups to be added to the ELB created. Unlike the annotation
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB.

service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
# A list of additional security groups to be added to the ELB
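To show where these annotations live (a sketch, not part of this diff; the security group ID is a placeholder taken from the lines above), a LoadBalancer Service on AWS might look like:

```yaml
# Sketch: a Service whose ELB is attached to one pre-existing security group,
# replacing any security groups Kubernetes would otherwise assign.
apiVersion: v1
kind: Service
metadata:
  name: web   # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```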
