
Add admonition type to shortcode (#9482)
* Change existing admon blocks

* Fix includes issue
lucperkins authored and k8s-ci-robot committed Nov 6, 2018
1 parent e839031 commit d65e179
Showing 192 changed files with 673 additions and 543 deletions.
@@ -105,7 +105,9 @@ $ gsutil mb gs://my-spark-models
```
You’ll need to change this URI to something that is unique for you. This will create a bucket that you can use in the example above.

**Note** : Computing the model and saving it is much slower than computing the model and throwing it away. This is expected. However, if you plan to reuse a model, it’s faster to compute the model and save it and then restore it each time you want to use it, rather than throw away and recompute the model each time.
{{< note >}}
Computing the model and saving it is much slower than computing the model and throwing it away. This is expected. However, if you plan to reuse a model, it’s faster to compute the model and save it and then restore it each time you want to use it, rather than throw away and recompute the model each time.
{{< /note >}}
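For instance, a minimal sketch of picking a unique bucket name (the project ID below is an assumption):

```
# Bucket names are global across GCS, so embed something unique to you,
# such as your project ID, before running the example above.
$ gsutil mb gs://my-project-spark-models
```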

### Using Horizontal Pod Autoscaling with Spark (Optional)
Spark is somewhat elastic to workers coming and going, which means we have an opportunity: we can use [Kubernetes Horizontal Pod Autoscaling](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/) to scale out the Spark worker pool automatically, setting a target CPU threshold for the workers and a minimum/maximum pool size. This obviates the need to configure the number of worker replicas manually.
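For example, a sketch of autoscaling the worker pool with `kubectl autoscale` (the replication controller name and the thresholds are assumptions):

```
# Keep between 2 and 10 Spark workers, targeting 80% CPU utilization
$ kubectl autoscale rc spark-worker-controller --min=2 --max=10 --cpu-percent=80
```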
@@ -30,7 +30,7 @@ You can submit a blog post for consideration one of two ways:
If you have a post that you want to remain confidential until your publish date, please submit your post via the Google form. Otherwise, you can choose your submission process based on your comfort level and preferred workflow.

{{< note >}}
**Note:** Our workflow hasn't changed for confidential advance drafts. Additionally, we'll coordinate publishing for time sensitive posts to ensure that information isn't released prematurely through an open pull request.
Our workflow hasn't changed for confidential advance drafts. Additionally, we'll coordinate publishing for time sensitive posts to ensure that information isn't released prematurely through an open pull request.
{{< /note >}}

### Call for reviewers
2 changes: 1 addition & 1 deletion content/en/docs/concepts/architecture/cloud-controller.md
@@ -51,7 +51,7 @@ In version 1.9, the CCM runs the following controllers from the preceding list:
Additionally, it runs another controller called the PersistentVolumeLabels controller. This controller is responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds.

{{< note >}}
**Note:** Volume controller was deliberately chosen to not be a part of CCM. Due to the complexity involved and due to the existing efforts to abstract away vendor specific volume logic, it was decided that volume controller will not be moved to CCM.
The volume controller was deliberately kept out of the CCM. Given the complexity involved and the existing efforts to abstract away vendor-specific volume logic, it was decided that the volume controller will not be moved to the CCM.
{{< /note >}}

The original plan to support volumes using CCM was to use Flex volumes to support pluggable volumes. However, a competing effort known as CSI is being planned to replace Flex.
6 changes: 3 additions & 3 deletions content/en/docs/concepts/architecture/nodes.md
@@ -84,7 +84,7 @@ A Pod that does not have any tolerations gets scheduled according to the old mod
tolerates the taints of a particular Node can be scheduled on that Node.

{{< caution >}}
**Caution:** Enabling this feature creates a small delay between the
Enabling this feature creates a small delay between the
time when a condition is observed and when a taint is created. This delay is usually less than one second, but it can increase the number of Pods that are successfully scheduled but rejected by the kubelet.
{{< /caution >}}

@@ -128,7 +128,7 @@ services are running -- it is eligible to run a pod. Otherwise, it is
ignored for any cluster activity until it becomes valid.

{{< note >}}
**Note:** Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
You must explicitly delete the Node object to stop this process.
{{< /note >}}
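For example (node name illustrative):

```
# Delete the Node object so Kubernetes stops re-checking its validity
$ kubectl delete node k8s-node-1
```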

@@ -241,7 +241,7 @@ kubectl cordon $NODENAME
```

{{< note >}}
**Note:** Pods created by a DaemonSet controller bypass the Kubernetes scheduler
Pods created by a DaemonSet controller bypass the Kubernetes scheduler
and do not respect the unschedulable attribute on a node. This assumes that daemons belong on
the machine even if it is being drained of applications while it prepares for a reboot.
{{< /note >}}
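A sketch of draining a node while leaving DaemonSet pods in place:

```
# Evict regular pods from the node, skipping pods managed by DaemonSets
$ kubectl drain $NODENAME --ignore-daemonsets
```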
16 changes: 12 additions & 4 deletions content/en/docs/concepts/cluster-administration/logging.md
@@ -57,7 +57,9 @@ You can use `kubectl logs` to retrieve logs from a previous instantiation of a c

Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.

**Note:** The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
{{< note >}}
The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages. You need to handle multi-line messages at the logging agent level or higher.
{{< /note >}}
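To illustrate the per-line behavior, records written by the json-file driver look roughly like this (path and container ID are placeholders):

```
$ cat /var/lib/docker/containers/<container-id>/<container-id>-json.log
{"log":"first line of a multi-line message\n","stream":"stdout","time":"2018-11-06T00:00:00.000000000Z"}
{"log":"second line, stored as a separate record\n","stream":"stdout","time":"2018-11-06T00:00:00.100000000Z"}
```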

By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.

@@ -81,13 +83,15 @@ When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file, returning the contents in the response.

**Note:** Currently, if some external system has performed the rotation,
{{< note >}}
Currently, if some external system has performed the rotation,
only the contents of the latest log file will be available through
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
the rotation and there are two files: one 10MB in size and one empty.
`kubectl logs` will then return an empty response.

[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
{{< /note >}}
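For reference, logs for the current and the previous instantiation of a container can be fetched as follows (pod name assumed):

```
$ kubectl logs counter
# Logs from the previous instantiation of the container, if one exists
$ kubectl logs counter --previous
```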

### System component logs

@@ -215,10 +219,12 @@ If the node-level logging agent is not flexible enough for your situation, you
can create a sidecar container with a separate logging agent that you have
configured specifically to run with your application.

**Note**: Using a logging agent in a sidecar container can lead
{{< note >}}
Using a logging agent in a sidecar container can lead
to significant resource consumption. Moreover, you won't be able to access
those logs using `kubectl logs` command, because they are not controlled
by the kubelet.
{{< /note >}}

As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that
@@ -227,9 +233,11 @@ a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to c

{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}

**Note**: The configuration of fluentd is beyond the scope of this article. For
{{< note >}}
The configuration of fluentd is beyond the scope of this article. For
information about configuring fluentd, see the
[official fluentd documentation](http://docs.fluentd.org/).
{{< /note >}}

The second file describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.
@@ -331,7 +331,7 @@ Currently, resources are created without this annotation, so the first invocatio
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.

{{< note >}}
**Note:** To use apply, always create resource initially with either `kubectl apply` or `kubectl create --save-config`.
To use apply, always create resources initially with either `kubectl apply` or `kubectl create --save-config`.
{{< /note >}}
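For example (manifest path illustrative):

```
# Either command records the configuration needed for later three-way diffs
$ kubectl apply -f ./deployment.yaml
$ kubectl create -f ./deployment.yaml --save-config
```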

### kubectl edit
10 changes: 7 additions & 3 deletions content/en/docs/concepts/configuration/assign-pod-node.md
@@ -87,7 +87,7 @@ with a standard set of labels. As of Kubernetes v1.4 these labels are
* `beta.kubernetes.io/arch`

{{< note >}}
**Note:** The value of these labels is cloud provider specific and is not guaranteed to be reliable.
The value of these labels is cloud provider specific and is not guaranteed to be reliable.
For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments
and a different value in other environments.
{{< /note >}}
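You can inspect these labels, along with any custom ones, on your own nodes:

```
$ kubectl get nodes --show-labels
```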
@@ -173,11 +173,15 @@ like node, rack, cloud provider zone, cloud provider region, etc. You express it
key for the node label that the system uses to denote such a topology domain, e.g. see the label keys listed above
in the section [Interlude: built-in node labels](#interlude-built-in-node-labels).

**Note:** Inter-pod affinity and anti-affinity require substantial amount of
{{< note >}}
Inter-pod affinity and anti-affinity require a substantial amount of
processing, which can slow down scheduling in large clusters significantly. We do
not recommend using them in clusters larger than several hundred nodes.
{{< /note >}}

**Note:** Pod anti-affinity requires nodes to be consistently labelled, i.e. every node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior.
{{< note >}}
Pod anti-affinity requires nodes to be consistently labelled, i.e. every node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior.
{{< /note >}}

As with node affinity, there are currently two types of pod affinity and anti-affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
`preferredDuringSchedulingIgnoredDuringExecution` which denote "hard" vs. "soft" requirements.
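As a sketch of the "hard" variant (pod name, label, and image are assumptions), anti-affinity keyed on the hostname label keeps replicas on separate nodes:

```
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx
EOF
```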
@@ -149,7 +149,9 @@ When using Docker:
multiplied by 100. The resulting value is the total amount of CPU time that a container can use
every 100ms. A container cannot use more than its share of CPU time during this interval.

{{< note >}}**Note**: The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.{{</ note >}}
{{< note >}}
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
{{</ note >}}
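A worked example of the conversion described above (values illustrative): a limit of `500m` is 500 millicores; multiplied by 100 this gives 50000 microseconds of CPU time per 100ms period, equivalent to half a CPU:

```
# 500m * 100 = 50000us of CPU time per 100000us (100ms) period
$ docker run --cpu-period=100000 --cpu-quota=50000 nginx
```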

- The `spec.containers[].resources.limits.memory` is converted to an integer, and
used as the value of the
@@ -317,7 +319,7 @@ Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managi
This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope.

{{< note >}}
**Note:** If an optional runtime partition is used, root partition will not hold any image layer or writable layers.
If an optional runtime partition is used, root partition will not hold any image layer or writable layers.
{{< /note >}}

### Requests and limits setting for local ephemeral storage
@@ -420,7 +422,7 @@ http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```

{{< note >}}
**Note**: In the preceding request, `~1` is the encoding for the character `/`
In the preceding request, `~1` is the encoding for the character `/`
in the patch path. The operation path value in JSON-Patch is interpreted as a
JSON-Pointer. For more details, see
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
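The request the note refers to is elided from this hunk, but it is roughly of the following form (the extended resource name is illustrative); note the `~1` in the patch path:

```
$ curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```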
@@ -476,15 +478,15 @@ Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
_invalid_ quantities are `0.5` and `1500m`.

{{< note >}}
**Note:** Extended resources replace Opaque Integer Resources.
Extended resources replace Opaque Integer Resources.
Users can use any domain name prefix other than `kubernetes.io` which is reserved.
{{< /note >}}

To consume an extended resource in a Pod, include the resource name as a key
in the `spec.containers[].resources.limits` map in the container spec.

{{< note >}}
**Note:** Extended resources cannot be overcommitted, so request and limit
Extended resources cannot be overcommitted, so request and limit
must be equal if both are present in a container spec.
{{< /note >}}
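A minimal sketch (resource name and image are assumptions); since extended resources cannot be overcommitted, specifying only the limit implies an equal request:

```
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: demo
    image: nginx
    resources:
      limits:
        example.com/dongle: "3"
EOF
```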

@@ -12,7 +12,7 @@ find the information it needs to choose a cluster and communicate with the API s
of a cluster.

{{< note >}}
**Note:** A file that is used to configure access to clusters is called
A file that is used to configure access to clusters is called
a *kubeconfig file*. This is a generic way of referring to configuration files.
It does not mean that there is a file named `kubeconfig`.
{{< /note >}}
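By default `kubectl` reads `$HOME/.kube/config`; the `KUBECONFIG` environment variable or the `--kubeconfig` flag can point to any file (the path below is illustrative):

```
# Show the configuration currently in effect
$ kubectl config view
# Use a differently named file explicitly
$ KUBECONFIG=/path/to/my-cluster-config kubectl config view
```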
6 changes: 3 additions & 3 deletions content/en/docs/concepts/configuration/overview.md
@@ -84,15 +84,15 @@ The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.

{{< note >}}
**Note:** To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), for example `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`. The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), for example `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`. The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
{{< /note >}}
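For example, pulling by digest always resolves to exactly the same image (the image/digest pairing below is taken from the Docker documentation example linked above):

```
$ docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```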

{{< note >}}
**Note:** You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
{{< /note >}}

{{< note >}}
**Note:** The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
{{< /note >}}

## Using kubectl
33 changes: 12 additions & 21 deletions content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -36,7 +36,7 @@ Kubernetes Version | Priority and Preemption State | Enabled by default
1.10 | alpha | no
1.11 | beta | yes

{{< warning >}} **Warning**: In a cluster where not all users are trusted, a
{{< warning >}}In a cluster where not all users are trusted, a
malicious user could create pods at the highest possible priorities, causing
other pods to be evicted/not get scheduled. To resolve this issue,
[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) is
@@ -71,24 +71,13 @@ Pods.

## How to disable preemption

{{< note >}} **Note**: In Kubernetes 1.11, critical pods (except DaemonSet pods,
which are still scheduled by the DaemonSet controller) rely on scheduler
preemption to be scheduled when a cluster is under resource pressure. For this
reason, you will need to run an older version of Rescheduler if you decide to
disable preemption. More on this is provided below. {{< /note >}}

#### Option 1: Disable both Pod priority and preemption

Disabling Pod priority disables preemption as well. In order to disable Pod
Priority, set the feature to false for API server, Scheduler, and Kubelet.
Disabling the feature on Kubelets is not vital. You can leave the feature on for
Kubelets if rolling out is hard.

```
--feature-gates=PodPriority=false
```

#### Option 2: Disable Preemption only
{{< note >}}
In Kubernetes 1.11, critical pods (except DaemonSet pods, which are
still scheduled by the DaemonSet controller) rely on scheduler preemption to be
scheduled when a cluster is under resource pressure. For this reason, you will
need to run an older version of Rescheduler if you decide to disable preemption.
More on this is provided below.
{{< /note >}}

In Kubernetes 1.11 and later, preemption is controlled by a kube-scheduler flag
`disablePreemption`, which is set to `false` by default.
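A sketch of a scheduler configuration that turns preemption off; the `apiVersion` follows the 1.11-era component config shown elsewhere in these docs, and the file path is an assumption. Pass the file to kube-scheduler with `--config`:

```
$ cat <<EOF > /etc/kubernetes/scheduler-config.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider
disablePreemption: true
EOF
```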
@@ -266,11 +255,13 @@ A Node is considered for preemption only when the answer to this question is
yes: "If all the Pods with lower priority than the pending Pod are removed from
the Node, can the pending Pod be scheduled on the Node?"

{{< note >}} **Note:** Preemption does not necessarily remove all lower-priority
{{< note >}}
Preemption does not necessarily remove all lower-priority
Pods. If the pending Pod can be scheduled by removing fewer than all
lower-priority Pods, then only a portion of the lower-priority Pods are removed.
Even so, the answer to the preceding question must be yes. If the answer is no,
the Node is not considered for preemption. {{< /note >}}
the Node is not considered for preemption.
{{< /note >}}

If a pending Pod has inter-pod affinity to one or more of the lower-priority
Pods on the Node, the inter-Pod affinity rule cannot be satisfied in the absence
@@ -45,9 +45,11 @@ algorithmSource:
percentageOfNodesToScore: 50
```
{{< note >}} **Note**: In clusters with zero or less than 50 feasible nodes, the
{{< note >}}
In clusters with zero or less than 50 feasible nodes, the
scheduler still checks all the nodes, simply because there are not enough
feasible nodes to stop the scheduler's search early. {{< /note >}}
feasible nodes to stop the scheduler's search early.
{{< /note >}}
**To disable this feature**, you can set `percentageOfNodesToScore` to 100.
