fix v3.2 links #454

Merged · 1 commit · Aug 19, 2021

@@ -71,6 +71,6 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
- write QPS to all servers is increased by 30~80% because followers can receive the latest commit index earlier and commit proposals faster.

[c7146bd5]: https://github.com/etcd-io/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
-[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks
+[etcd-2.1-benchmark]: ../etcd-2-1-0-alpha-benchmarks/
[hack-benchmark]: https://github.com/etcd-io/etcd/tree/v2.3.8/hack/benchmark
[hey]: https://github.com/rakyll/hey
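
The context above points at the [hey] benchmark tool. As a rough sketch of how it might be aimed at an etcd v2 endpoint (the request count, concurrency, and URL here are assumptions, not from this PR):

```
# 10,000 requests at concurrency 100 against a v2 keys endpoint.
hey -n 10000 -c 100 http://10.0.1.10:2379/v2/keys/foo
```
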
content/en/docs/v3.2/benchmarks/etcd-3-demo-benchmarks.md (2 changes: 1 addition & 1 deletion)
@@ -43,4 +43,4 @@ The performance is nearly the same as the one with an empty server handler.
The performance with an empty server handler is not affected by one put, so the performance degradation should be caused by the storage package.

-[etcd-v3-benchmark]: ../op-guide/performance/#benchmarks
+[etcd-v3-benchmark]: ../../op-guide/performance/#benchmarks
content/en/docs/v3.2/dev-guide/api_grpc_gateway.md (2 changes: 1 addition & 1 deletion)
@@ -57,7 +57,7 @@ curl -L http://localhost:2379/v3alpha/kv/txn \

Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].

-[api-ref]: ./api_reference_v3
+[api-ref]: ../api_reference_v3/
[etcdctl]: https://github.com/etcd-io/etcd/tree/master/etcdctl
[go-client]: https://github.com/etcd-io/etcd/tree/master/client/v3
[grpc]: http://www.grpc.io/
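
For context on the `/v3alpha` path quoted in this hunk, a minimal gateway round trip might look like the following sketch. It assumes a local etcd v3.2 listening on 2379; keys and values are base64-encoded (`foo` → `Zm9v`, `bar` → `YmFy`):

```
# Put a key through the JSON/gRPC gateway, then read it back.
curl -L http://localhost:2379/v3alpha/kv/put \
  -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
curl -L http://localhost:2379/v3alpha/kv/range \
  -X POST -d '{"key": "Zm9v"}'
```
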
content/en/docs/v3.2/dev-guide/local_cluster.md (4 changes: 2 additions & 2 deletions)
@@ -87,5 +87,5 @@ hello

To learn more about interacting with etcd, read the [interacting with etcd section][interacting].

-[clustering]: ../op-guide/clustering
-[interacting]: ./interacting_v3
+[clustering]: ../../op-guide/clustering/
+[interacting]: ../interacting_v3/
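
The `hello` context line in this hunk comes from the doc's read-back example. A minimal sketch of that interaction, assuming a single-member cluster on the default client port:

```
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 put foo hello
# OK
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 get foo
# foo
# hello
```
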
content/en/docs/v3.2/faq.md (16 changes: 8 additions & 8 deletions)
@@ -135,19 +135,19 @@ If none of the above suggestions clear the warnings, please [open an issue][new_
etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before they finish receiving the previous one. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.


-[api-mvcc]: learning/api#revisions
-[backend_commit_metrics]: ./metrics#disk
+[api-mvcc]: ../learning/api/#revisions
+[backend_commit_metrics]: ../metrics/#disk
[backup]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/Documentation/op-guide/recovery.md#snapshotting-the-keyspace
[benchmark]: https://github.com/etcd-io/etcd/tree/v3.2.18/tools/benchmark
[benchmark-result]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/Documentation/op-guide/performance.md
[chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
-[hardware-setup]: ./op-guide/hardware
-[maintenance-compact]: op-guide/maintenance#history-compaction
-[maintenance-defragment]: op-guide/maintenance#defragmentation
+[hardware-setup]: ../op-guide/hardware/
+[maintenance-compact]: ../op-guide/maintenance/#history-compaction
+[maintenance-defragment]: ../op-guide/maintenance/#defragmentation
[maintenance-disarm]: https://github.com/etcd-io/etcd/tree/master/etcdctl#alarm-disarm
[new_issue]: https://github.com/etcd-io/etcd/issues/new
[raft]: https://raft.github.io/raft.pdf
[runtime reconfiguration]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/Documentation/op-guide/runtime-configuration.md
-[supported-platform]: ./op-guide/supported-platform
-[tuning]: ./tuning
-[wal_fsync_duration_seconds]: ./metrics#disk
+[supported-platform]: ../op-guide/supported-platform/
+[tuning]: ../tuning/
+[wal_fsync_duration_seconds]: ../metrics/#disk
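
To make the 1Gbps expectation quoted in this hunk concrete, a back-of-the-envelope check (the snapshot size is an assumption):

```
# 1 Gbps ≈ 125 MB/s, so a 4 GB (4096 MB) snapshot is expected to take ~32 s;
# a send that also exceeds the 30 s floor triggers the warning.
SNAPSHOT_MB=4096
echo "expected transfer: $((SNAPSHOT_MB / 125)) s"
```
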
content/en/docs/v3.2/learning/api.md (2 changes: 1 addition & 1 deletion)
@@ -477,7 +477,7 @@ message LeaseKeepAliveResponse {
* TTL - the new time-to-live, in seconds, that the lease has remaining.

[elections]: https://github.com/etcd-io/etcd/blob/master/client/v3/concurrency/election.go
-[grpc-api]: ../dev-guide/api_reference_v3
+[grpc-api]: ../../dev-guide/api_reference_v3/
[grpc-service]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/etcdserver/etcdserverpb/rpc.proto
[kv-proto]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/mvcc/mvccpb/kv.proto
[locks]: https://github.com/etcd-io/etcd/blob/master/client/v3/concurrency/mutex.go
content/en/docs/v3.2/learning/api_guarantees.md (2 changes: 1 addition & 1 deletion)
@@ -63,4 +63,4 @@ etcd ensures linearizability for all other operations by default. Linearizabilit
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
-[txn]: api#transaction
+[txn]: ../api/#transaction
content/en/docs/v3.2/learning/why.md (18 changes: 9 additions & 9 deletions)
@@ -90,24 +90,24 @@ For distributed coordination, choosing etcd can help prevent operational headach
[container-linux]: https://coreos.com/why
[curator]: http://curator.apache.org/
[dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul
-[etcd-commonname]: ../op-guide/authentication#using-tls-common-name
+[etcd-commonname]: ../../op-guide/authentication/#using-tls-common-name
[etcd-etcdctl-elect]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#elect-options-election-name-proposal
[etcd-etcdctl-lock]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#lock-lockname-command-arg1-arg2-
-[etcd-json]: ../dev-guide/api_grpc_gateway
-[etcd-linread]: api_guarantees#linearizability
-[etcd-mvcc]: data_model
-[etcd-rbac]: ../op-guide/authentication#working-with-roles
+[etcd-json]: ../../dev-guide/api_grpc_gateway/
+[etcd-linread]: ../api_guarantees/#linearizability
+[etcd-mvcc]: ../data_model/
+[etcd-rbac]: ../../op-guide/authentication#working-with-roles
[etcd-recipe]: https://pkg.go.dev/github.com/etcd-io/etcd/contrib/recipes
-[etcd-reconfig]: ../op-guide/runtime-configuration
-[etcd-txn]: api#transaction
+[etcd-reconfig]: ../../op-guide/runtime-configuration
+[etcd-txn]: ../api/#transaction
[etcd-v3election]: https://pkg.go.dev/github.com/etcd-io/etcd/etcdserver/api/v3election/v3electionpb
[etcd-v3lock]: https://pkg.go.dev/github.com/etcd-io/etcd/etcdserver/api/v3lock/v3lockpb
-[etcd-watch]: api#watch-streams
+[etcd-watch]: ../api/#watch-streams
[grpc]: http://www.grpc.io
[kubernetes]: http://kubernetes.io/docs/whatisk8s
[locksmith]: https://github.com/coreos/locksmith
[newsql-leader]: http://dl.acm.org/citation.cfm?id=2960999
-[production-users]: ../production-users
+[production-users]: ../../production-users/
[spanner]: https://cloud.google.com/spanner/
[spanner-roles]: https://cloud.google.com/spanner/docs/iam#roles
[tidb]: https://github.com/pingcap/tidb
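
The [etcd-etcdctl-lock] and [etcd-etcdctl-elect] references in this hunk map to etcdctl commands. A hedged sketch, assuming a reachable cluster on localhost:2379:

```
# Acquire a named lock; prints the lock ownership key while held.
ETCDCTL_API=3 etcdctl lock mylock
# In another terminal: campaign in a named election with a proposal value.
ETCDCTL_API=3 etcdctl elect myelection proposal-value
```
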
content/en/docs/v3.2/op-guide/clustering.md (14 changes: 7 additions & 7 deletions)
@@ -469,13 +469,13 @@ When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mod
To set up an etcd cluster with proxies of the v2 API, please read the [clustering doc in the etcd 2.3 release][clustering_etcd2].

[clustering_etcd2]: https://github.com/etcd-io/etcd/blob/release-2.3/Documentation/clustering.md
-[conf-adv-client]: configuration#--advertise-client-urls
-[conf-listen-client]: configuration#--listen-client-urls
-[discovery-proto]: ../dev-internal/discovery_protocol
-[gateway]: gateway
+[conf-adv-client]: ../configuration/#--advertise-client-urls
+[conf-listen-client]: ../configuration/#--listen-client-urls
+[discovery-proto]: ../../dev-internal/discovery_protocol/
+[gateway]: ../gateway/
[proxy]: https://github.com/etcd-io/etcd/blob/release-2.3/Documentation/proxy.md
[rfc-srv]: http://www.ietf.org/rfc/rfc2052.txt
-[runtime-conf]: runtime-configuration
-[runtime-reconf-design]: runtime-reconf-design
-[security-guide]: security
+[runtime-conf]: ../runtime-configuration/
+[runtime-reconf-design]: ../runtime-reconf-design/
+[security-guide]: ../security/
[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup
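
The [conf-adv-client] and [conf-listen-client] anchors fixed above refer to the flags used when bootstrapping statically. A minimal single-member sketch, with names and IPs assumed:

```
etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --listen-peer-urls http://10.0.1.10:2380 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --listen-client-urls http://10.0.1.10:2379 \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
  --initial-cluster-state new
```
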
content/en/docs/v3.2/op-guide/configuration.md (14 changes: 7 additions & 7 deletions)
@@ -302,13 +302,13 @@ Follow the instructions when using these flags.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512'
+ default: "simple"

-[build-cluster]: clustering#static
-[discovery]: clustering#discovery
+[build-cluster]: ../clustering/#static
+[discovery]: ../clustering/#discovery
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
-[proxy]: /docs/v2.3/proxy
-[reconfig]: runtime-configuration
+[proxy]: /docs/v2.3/proxy/
+[reconfig]: ../runtime-configuration/
[restore]: /docs/v2.3/admin_guide#restoring-a-backup
-[security]: security
-[static bootstrap]: clustering#static
+[security]: ../security/
+[static bootstrap]: ../clustering/#static
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
-[tuning]: ../tuning#time-parameters
+[tuning]: ../../tuning/#time-parameters
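
The JWT option quoted at the top of this hunk can be passed directly on the command line; a sketch assuming the RSA key pair already exists:

```
# Use JWT tokens instead of the default "simple" tokens.
etcd --auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512
```
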
content/en/docs/v3.2/op-guide/container.md (2 changes: 1 addition & 1 deletion)
@@ -2,7 +2,7 @@
title: Run etcd clusters inside containers
---

-The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering#static).
+The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](../clustering/#static).
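
As a companion to the static-bootstrap link fixed above, a single-member Docker sketch (the image tag, container name, and URLs are assumptions):

```
docker run -d -p 2379:2379 -p 2380:2380 --name etcd \
  quay.io/coreos/etcd:v3.2.18 \
  /usr/local/bin/etcd --name node1 \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://0.0.0.0:2379
```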

## rkt

content/en/docs/v3.2/op-guide/failures.md (4 changes: 2 additions & 2 deletions)
@@ -42,5 +42,5 @@ A cluster bootstrap is only successful if all required members successfully star

Of course, it is possible to recover a failed bootstrapped cluster in the same way as recovering a running cluster. However, it almost always takes more time and resources to recover such a cluster than to bootstrap a new one, since there is no data to recover.

-[backup]: maintenance#snapshot-backup
-[unrecoverable]: recovery
+[backup]: ../maintenance/#snapshot-backup
+[unrecoverable]: ../recovery/
content/en/docs/v3.2/op-guide/grpc_proxy.md (2 changes: 1 addition & 1 deletion)
@@ -101,7 +101,7 @@ bar

## Client endpoint synchronization and name resolution

-The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../dev-guide/grpc_naming.md).
+The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../../dev-guide/grpc_naming/).

Register proxy(s) by providing a user-defined prefix:
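
A hedged sketch of that registration, with the addresses, prefix, and TTL assumed:

```
etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --advertise-client-url=127.0.0.1:23790 \
  --resolver-prefix="___grpc_proxy_endpoint" \
  --resolver-ttl=60
```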

content/en/docs/v3.2/op-guide/hardware.md (2 changes: 1 addition & 1 deletion)
@@ -91,4 +91,4 @@ Example application workload: A 3,000 node Kubernetes cluster

[diskbench]: https://github.com/ongardie/diskbenchmark
[fio]: https://github.com/axboe/fio
-[tuning]: ../tuning
+[tuning]: ../../tuning/
content/en/docs/v3.2/op-guide/runtime-configuration.md (16 changes: 8 additions & 8 deletions)
@@ -163,13 +163,13 @@ It is enabled by default.

[add member]: #add-a-new-member
[cluster-reconf]: #cluster-reconfiguration-operations
-[conf-adv-peer]: configuration#--initial-advertise-peer-urls
-[conf-name]: configuration#--name
-[disaster recovery]: recovery
-[fault tolerance table]: /docs/v2.3/admin_guide#fault-tolerance-table
+[conf-adv-peer]: ../configuration/#--initial-advertise-peer-urls
+[conf-name]: ../configuration/#--name
+[disaster recovery]: ../recovery/
+[fault tolerance table]: /docs/v2.3/admin_guide/#fault-tolerance-table
[majority failure]: #restart-cluster-from-majority-failure
-[member migration]: /docs/v2.3/admin_guide#member-migration
-[member-api]: /docs/v2.3/members_api
-[member-api-grpc]: ../dev-guide/api_reference_v3#service-cluster-etcdserveretcdserverpbrpcproto
+[member migration]: /docs/v2.3/admin_guide/#member-migration
+[member-api]: /docs/v2.3/members_api/
+[member-api-grpc]: ../../dev-guide/api_reference_v3/#service-cluster-etcdserveretcdserverpbrpcproto
[remove member]: #remove-a-member
-[runtime-reconf]: runtime-reconf-design
+[runtime-reconf]: ../runtime-reconf-design/
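
The [add member] and [remove member] anchors in this hunk correspond to etcdctl member operations. A sketch, with the member name and peer URL assumed:

```
ETCDCTL_API=3 etcdctl member add infra3 --peer-urls=http://10.0.1.13:2380
ETCDCTL_API=3 etcdctl member list
```
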
content/en/docs/v3.2/op-guide/runtime-reconf-design.md (4 changes: 2 additions & 2 deletions)
@@ -48,5 +48,5 @@ It seems that using public discovery service is a convenient way to do runtime r

To have a discovery service that supports runtime reconfiguration, the best choice is to build a private one.

-[add-member]: runtime-configuration#add-a-new-member
-[disaster-recovery]: recovery
+[add-member]: ../runtime-configuration/#add-a-new-member
+[disaster-recovery]: ../recovery/
content/en/docs/v3.2/op-guide/security.md (2 changes: 1 addition & 1 deletion)
@@ -221,7 +221,7 @@ Make sure to sign the certificates with a Subject Name the member's public IP address
The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can do [it][alt-name] too.

[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
-[auth]: authentication
+[auth]: ../authentication/
[cfssl]: https://github.com/cloudflare/cfssl
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup
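
For the SAN note in this hunk's context, one way to produce such a request with openssl is the sketch below (requires OpenSSL 1.1.1+ for `-addext`; the names and IP are assumptions):

```
openssl req -new -newkey rsa:2048 -nodes \
  -keyout member.key -out member.csr \
  -subj "/CN=member1.example.com" \
  -addext "subjectAltName = DNS:member1.example.com, IP:10.0.1.10"
```
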
content/en/docs/v3.2/platforms/aws.md (6 changes: 3 additions & 3 deletions)
@@ -10,12 +10,12 @@ This guide assumes operational knowledge of Amazon Web Services (AWS), specifica

As a critical building block for distributed systems, it is crucial to perform adequate capacity planning in order to support the intended cluster workload. As a highly available and strongly consistent data store, increasing the number of nodes in an etcd cluster will generally affect performance adversely. This makes sense intuitively, as more nodes means more members for the leader to coordinate state across. The most direct way to increase throughput and decrease latency of an etcd cluster is to allocate more disk I/O, network I/O, CPU, and memory to cluster members. In the event it is impossible to temporarily divert incoming requests to the cluster, scaling the EC2 instances which comprise the etcd cluster members one at a time may improve performance. It is, however, best to avoid bottlenecks through capacity planning.

-The etcd team has produced a [hardware recommendation guide](../op-guide/hardware.md) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
+The etcd team has produced a [hardware recommendation guide](../../op-guide/hardware/) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.

AWS provides a service for creating groups of EC2 instances which are dynamically sized to match load on the instances. Using an Auto Scaling Group ([ASG](http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html)) to dynamically scale an etcd cluster is not recommended for several reasons including:

* etcd performance is generally inversely proportional to the number of members in a cluster due to the synchronous replication which provides strong consistency of data stored in etcd
-* the operational complexity of adding [lifecycle hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to properly add and remove members from an etcd cluster by modifying the [runtime configuration](../op-guide/runtime-configuration.md)
+* the operational complexity of adding [lifecycle hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to properly add and remove members from an etcd cluster by modifying the [runtime configuration](../../op-guide/runtime-configuration/)

Auto Scaling Groups do provide a number of benefits besides cluster scaling which include:

@@ -60,7 +60,7 @@ A highly available etcd cluster is resilient to member loss, however, it is impo

### Performance/Throughput

-The performance of an etcd cluster is roughly quantifiable through latency and throughput metrics which are primarily affected by disk and network performance. Detailed performance planning information is provided in the [performance section](../op-guide/performance.md) of the etcd operations guide.
+The performance of an etcd cluster is roughly quantifiable through latency and throughput metrics which are primarily affected by disk and network performance. Detailed performance planning information is provided in the [performance section](../../op-guide/performance/) of the etcd operations guide.

#### Network

content/en/docs/v3.2/upgrades/upgrade_3_1.md (8 changes: 4 additions & 4 deletions)
@@ -23,15 +23,15 @@ Following metrics from v3.0.x have been deprecated in favor of [go-grpc-promethe

#### Upgrade requirements

-To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.
+To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](../upgrade_3_0/) before upgrading to 3.1.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

#### Preparation

Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.

-Before beginning, [backup the etcd data](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [backup the etcd data](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).

#### Mixed versions

@@ -49,7 +49,7 @@ For a much larger total data size, 100MB or more, this one-time process might t

If all members have been upgraded to v3.1, the cluster will be upgraded to v3.1, and downgrade from this completed state is **not possible**. If any single member is still v3.0, however, the cluster and its operations remain "v3.0", and it is possible from this mixed cluster state to return to using a v3.0 etcd binary on all members.

-Please [backup the data directory](../op-guide/maintenance#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
+Please [backup the data directory](../../op-guide/maintenance/#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.

### Upgrade procedure

@@ -82,7 +82,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
2017-01-17 09:34:34.364907 W | etcdserver: failed to reach the peerURL(http://localhost:2380) of member fd32987dcd0511e0 (Get http://localhost:2380/version: dial tcp 127.0.0.1:2380: getsockopt: connection refused)
```

-It's a good idea at this point to [backup the etcd data](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur:
+It's a good idea at this point to [backup the etcd data](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur:

```
$ etcdctl snapshot save backup.db
```
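
Putting the two pre-upgrade steps from this file's context together (endpoints assumed):

```
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 snapshot save backup.db
```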