Commit 50e95a0

David Oppenheimer committed:

Absolutize links that leave the docs/ tree to go anywhere other than
to examples/ or back to docs/

1 parent d414e29, commit 50e95a0

27 files changed: 87 additions & 87 deletions
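
The change itself is mechanical: relative markdown links of the form `(../../<path>)` whose targets resolve outside the docs/ tree are rewritten to absolute `http://releases.k8s.io/HEAD/<path>` URLs, unless the target is under examples/ or back in docs/. A minimal sketch of that rewrite rule (a hypothetical Go helper, not the script actually used for this commit):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// absolutize rewrites markdown link targets of the form (../../<path>)
// to absolute releases.k8s.io URLs, leaving links into examples/ or
// docs/ relative, as described in the commit message.
func absolutize(line string) string {
	re := regexp.MustCompile(`\(\.\./\.\./([^)]+)\)`)
	return re.ReplaceAllStringFunc(line, func(m string) string {
		path := re.FindStringSubmatch(m)[1]
		if strings.HasPrefix(path, "examples/") || strings.HasPrefix(path, "docs/") {
			return m // exempt: stays relative
		}
		return fmt.Sprintf("(http://releases.k8s.io/HEAD/%s)", path)
	})
}

func main() {
	fmt.Println(absolutize("[Go](../../pkg/client/)"))
	// Output: [Go](http://releases.k8s.io/HEAD/pkg/client/)
}
```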

docs/admin/authorization.md

Lines changed: 1 addition & 1 deletion

@@ -113,7 +113,7 @@ To permit an action Policy with an unset namespace applies regardless of namespace
 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
 4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`

-[Complete file example](../../pkg/auth/authorizer/abac/example_policy_file.jsonl)
+[Complete file example](http://releases.k8s.io/HEAD/pkg/auth/authorizer/abac/example_policy_file.jsonl)

 ## Plugin Development

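For reference, an ABAC policy file is plain JSON with one policy object per line. A minimal sketch assembled only from the two policy lines quoted in this hunk (the linked `example_policy_file.jsonl` is the complete, authoritative version):

```json
{"user":"kubelet", "resource": "events"}
{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}
```
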
docs/admin/cluster-components.md

Lines changed: 6 additions & 6 deletions

@@ -97,17 +97,17 @@ selects a node for them to run on.
 Addons are pods and services that implement cluster features. They don't run on
 the master VM, but currently the default setup scripts that make the API calls
 to create these pods and services does run on the master VM. See:
-[kube-master-addons](../../cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
+[kube-master-addons](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)

 Addon objects are created in the "kube-system" namespace.

 Example addons are:
-* [DNS](../../cluster/addons/dns/) provides cluster local DNS.
-* [kube-ui](../../cluster/addons/kube-ui/) provides a graphical UI for the
+* [DNS](http://releases.k8s.io/HEAD/cluster/addons/dns/) provides cluster local DNS.
+* [kube-ui](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) provides a graphical UI for the
   cluster.
-* [fluentd-elasticsearch](../../cluster/addons/fluentd-elasticsearch/) provides
-  log storage. Also see the [gcp version](../../cluster/addons/fluentd-gcp/).
-* [cluster-monitoring](../../cluster/addons/cluster-monitoring/) provides
+* [fluentd-elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) provides
+  log storage. Also see the [gcp version](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/).
+* [cluster-monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) provides
   monitoring for the cluster.

 ## Node components
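
Since addon objects live in the "kube-system" namespace, they can be inspected with a namespace-scoped query; a usage sketch:

```console
$ kubectl get pods --namespace=kube-system
```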

docs/admin/cluster-large.md

Lines changed: 8 additions & 8 deletions

@@ -41,7 +41,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and

 A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).

-Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).
+Normally the number of nodes in a cluster is controlled by the the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/HEAD/cluster/gce/config-default.sh)).

 Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.

@@ -82,15 +82,15 @@ These limits, however, are based on data collected from addons running on 4-node

 To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
 * Scale memory and CPU limits for each of the following addons, if used, along with the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
-  * Heapster ([GCM/GCL backed](../../cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](../../cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](../../cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](../../cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
-  * [InfluxDB and Grafana](../../cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
-  * [skydns, kube2sky, and dns etcd](../../cluster/addons/dns/skydns-rc.yaml.in)
-  * [Kibana](../../cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
+  * Heapster ([GCM/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
+  * [InfluxDB and Grafana](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
+  * [skydns, kube2sky, and dns etcd](http://releases.k8s.io/HEAD/cluster/addons/dns/skydns-rc.yaml.in)
+  * [Kibana](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
 * Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
-  * [elasticsearch](../../cluster/addons/fluentd-elasticsearch/es-controller.yaml)
+  * [elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
 * Increase memory and CPU limits sligthly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
-  * [FluentD with ElasticSearch Plugin](../../cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
-  * [FluentD with GCP Plugin](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
+  * [FluentD with ElasticSearch Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
+  * [FluentD with GCP Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)

 For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.md#troubleshooting).

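As a usage sketch: assuming a provider whose `config-default.sh` honors an environment override (otherwise edit the value in the file directly), a larger cluster could be requested as:

```console
$ NUM_MINIONS=100 cluster/kube-up.sh
```
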
docs/admin/dns.md

Lines changed: 2 additions & 2 deletions

@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 # DNS Integration with Kubernetes

-As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
+As of kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
 If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
 configured to tell individual containers to use the DNS Service's IP to resolve DNS names.

@@ -68,7 +68,7 @@ time.

 ## For more information

-See [the docs for the DNS cluster addon](../../cluster/addons/dns/README.md).
+See [the docs for the DNS cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).


 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
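
To illustrate the mechanism in the first hunk: the kubelet points each container's resolver at the DNS Service's cluster IP, so a container's `/etc/resolv.conf` ends up containing a line roughly like this (the address shown is hypothetical):

```console
nameserver 10.0.0.10
```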

docs/admin/etcd.md

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ to reduce downtime in case of corruption.
 ## Default configuration

 The default setup scripts use kubelet's file-based static pods feature to run etcd in a
-[pod](../../cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
+[pod](http://releases.k8s.io/HEAD/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
 be run on master VMs. The default location that kubelet scans for manifests is
 `/etc/kubernetes/manifests/`.
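
With static pods, the kubelet runs any pod manifest it finds in its scan directory without going through the apiserver; as a sketch, placing the manifest is all it takes (directory from the hunk above, the copy step itself is hypothetical):

```console
$ sudo cp etcd.manifest /etc/kubernetes/manifests/
```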

docs/admin/high-availability.md

Lines changed: 1 addition & 1 deletion

@@ -107,7 +107,7 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
 If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
 `which kubelet` to determine if the binary is in fact installed. If it is not installed,
 you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
-[kubelet init file](../../cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
+[kubelet init file](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
 scripts.

 If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
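
A condensed sketch of the check-then-download step described above (the URL comes from the text; install locations for the binary and init scripts vary by distro and are not shown):

```console
$ which kubelet || curl -O https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet
```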

docs/admin/salt.md

Lines changed: 1 addition & 1 deletion

@@ -129,7 +129,7 @@ We should define a grains.conf key that captures more specifically what network

 ## Further reading

-The [cluster/saltbase](../../cluster/saltbase/) tree has more details on the current SaltStack configuration.
+The [cluster/saltbase](http://releases.k8s.io/HEAD/cluster/saltbase/) tree has more details on the current SaltStack configuration.


 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

docs/design/event_compression.md

Lines changed: 2 additions & 2 deletions

@@ -48,7 +48,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst

 ## Design

-Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields:
+Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
 * `FirstTimestamp util.Time`
   * The date/time of the first occurrence of the event.
 * `LastTimestamp util.Time`

@@ -72,7 +72,7 @@ Each binary that generates events:
   * `event.Reason`
   * `event.Message`
 * The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
-* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](../../pkg/client/record/event.go)).
+* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](http://releases.k8s.io/HEAD/pkg/client/record/event.go)).
 * If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
   * The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
   * The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
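
As a sketch of the dedup step these bullets describe (hypothetical Go names; the real logic lives in `pkg/client/record/event.go` and its LRU-backed events cache, and also issues the etcd PUT mentioned above):

```go
package main

import (
	"fmt"
	"time"
)

// eventKey sketches the aggregation key: events that agree on these
// fields are considered duplicates of each other.
type eventKey struct {
	Source, Kind, Namespace, Name, Reason, Message string
}

// compressedEvent tracks what the design calls FirstTimestamp,
// LastTimestamp, and a count of occurrences.
type compressedEvent struct {
	FirstTimestamp time.Time
	LastTimestamp  time.Time
	Count          int
}

// recordEvent either creates a new entry or compresses a duplicate by
// bumping Count and LastTimestamp. A real implementation would use an
// LRU cache capped at 4096 entries rather than an unbounded map.
func recordEvent(cache map[eventKey]*compressedEvent, k eventKey) *compressedEvent {
	now := time.Now()
	if e, ok := cache[k]; ok {
		e.Count++
		e.LastTimestamp = now
		return e // duplicate: update the existing entry, don't create one
	}
	e := &compressedEvent{FirstTimestamp: now, LastTimestamp: now, Count: 1}
	cache[k] = e
	return e
}

func main() {
	cache := map[eventKey]*compressedEvent{}
	k := eventKey{Source: "kubelet", Kind: "Pod", Name: "mypod", Reason: "Pulled", Message: "image pulled"}
	recordEvent(cache, k)
	e := recordEvent(cache, k)
	fmt.Println(e.Count) // 2: the second occurrence was compressed
}
```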

docs/devel/cherry-picks.md

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ particular, they may be self-merged by the release branch owner without fanfare,
 in the case the release branch owner knows the cherry pick was already
 requested - this should not be the norm, but it may happen.

-[Contributor License Agreements](../../CONTRIBUTING.md) is considered implicit
+[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit
 for all code within cherry-pick pull requests, ***unless there is a large
 conflict***.

docs/devel/client-libraries.md

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ Documentation for other releases can be found at

 ### Supported

-* [Go](../../pkg/client/)
+* [Go](http://releases.k8s.io/HEAD/pkg/client/)

 ### User Contributed

docs/devel/development.md

Lines changed: 2 additions & 2 deletions

@@ -35,7 +35,7 @@ Documentation for other releases can be found at

 # Releases and Official Builds

-Official releases are built in Docker containers. Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.
+Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below.

 ## Go development environment

@@ -324,7 +324,7 @@ The conformance test runs a subset of the e2e-tests against a manually-created c
 require support for up/push/down and other operations. To run a conformance test, you need to know the
 IP of the master for your cluster and the authorization arguments to use. The conformance test is
 intended to run against a cluster at a specific binary release of Kubernetes.
-See [conformance-test.sh](../../hack/conformance-test.sh).
+See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh).

 ## Testing out flaky tests
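
For the containerized build mentioned in the first hunk, the linked build/README.md documents wrapper scripts; a hedged sketch of a typical invocation (assuming the `build/run.sh` wrapper is available):

```console
$ build/run.sh hack/build-go.sh
```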

docs/devel/getting-builds.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ Documentation for other releases can be found at

 # Getting Kubernetes Builds

-You can use [hack/get-build.sh](../../hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
+You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to or use as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).

 ```console
 usage:

docs/devel/scheduler.md

Lines changed: 6 additions & 6 deletions

@@ -53,30 +53,30 @@ divided by the node's capacity).
 Finally, the node with the highest priority is chosen
 (or, if there are multiple such nodes, then one of them is chosen at random). The code
 for this main scheduling loop is in the function `Schedule()` in
-[plugin/pkg/scheduler/generic_scheduler.go](../../plugin/pkg/scheduler/generic_scheduler.go)
+[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)

 ## Scheduler extensibility

 The scheduler is extensible: the cluster administrator can choose which of the pre-defined
 scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
-defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
-[plugin/pkg/scheduler/algorithm/priorities/priorities.go](../../plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
+defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
+[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
 The policies that are applied when scheduling can be chosen in one of two ways. Normally,
 the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
-[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
 However, the choice of policies
 can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON
 file specifying which scheduling policies to use. See
 [examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example
 config file. (Note that the config file format is versioned; the API is defined in
-[plugin/pkg/scheduler/api](../../plugin/pkg/scheduler/api/)).
+[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)).
 Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,
 and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.

 ## Exploring the code

 If you want to get a global picture of how the scheduler works, you can start in
-[plugin/cmd/kube-scheduler/app/server.go](../../plugin/cmd/kube-scheduler/app/server.go)
+[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go)


 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
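
To make the filter-then-rank loop concrete, here is a toy sketch (hypothetical types and names, not the actual `Schedule()` implementation); in the real scheduler the predicates and priorities are the registered policies described above, and the sketch only mirrors the control flow:

```go
package main

import (
	"errors"
	"fmt"
)

type Node struct {
	Name         string
	FreeMilliCPU int
}

type Pod struct {
	Name            string
	RequestMilliCPU int
}

// predicate filters out nodes that can't run the pod;
// priority scores the nodes that remain.
type predicate func(Pod, Node) bool
type priority func(Pod, Node) int

// schedule applies every predicate, sums priority scores for the
// surviving nodes, and returns the highest-scoring node's name.
func schedule(pod Pod, nodes []Node, preds []predicate, prios []priority) (string, error) {
	best, bestScore := "", -1
	for _, n := range nodes {
		feasible := true
		for _, p := range preds {
			if !p(pod, n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		score := 0
		for _, pr := range prios {
			score += pr(pod, n)
		}
		if score > bestScore {
			best, bestScore = n.Name, score
		}
	}
	if best == "" {
		return "", errors.New("no feasible node")
	}
	return best, nil
}

func main() {
	fits := func(p Pod, n Node) bool { return p.RequestMilliCPU <= n.FreeMilliCPU }
	leastLoaded := func(p Pod, n Node) int { return n.FreeMilliCPU }
	nodes := []Node{{"node-a", 500}, {"node-b", 2000}}
	name, _ := schedule(Pod{"web", 250}, nodes, []predicate{fits}, []priority{leastLoaded})
	fmt.Println(name) // node-b: most free CPU among feasible nodes
}
```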
