diff --git a/content/en/docs/v3.4/benchmarks/etcd-2-2-0-rc-benchmarks.md b/content/en/docs/v3.4/benchmarks/etcd-2-2-0-rc-benchmarks.md
index 9854c35d..0dcb21f1 100644
--- a/content/en/docs/v3.4/benchmarks/etcd-2-2-0-rc-benchmarks.md
+++ b/content/en/docs/v3.4/benchmarks/etcd-2-2-0-rc-benchmarks.md
@@ -73,6 +73,6 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
 - write QPS to all servers is increased by 30~80% because follower could receive latest commit index earlier and commit proposals faster.
 
 [c7146bd5]: https://github.com/etcd-io/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
-[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks
+[etcd-2.1-benchmark]: ../etcd-2-1-0-alpha-benchmarks/
 [hack-benchmark]: https://github.com/etcd-io/etcd/tree/v2.3.8/hack/benchmark
 [hey]: https://github.com/rakyll/hey
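The hunk above references the [hey] load generator. As a rough sketch of the kind of request stream such a benchmark sends (flag values and the target URL are illustrative, not taken from the original setup):

```
# 100,000 PUTs to one member's v2 keys API over 64 concurrent connections
hey -n 100000 -c 64 -m PUT \
  -T "application/x-www-form-urlencoded" -d "value=bar" \
  http://10.0.1.10:2379/v2/keys/foo
```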
diff --git a/content/en/docs/v3.4/benchmarks/etcd-3-demo-benchmarks.md b/content/en/docs/v3.4/benchmarks/etcd-3-demo-benchmarks.md
index 40f5c994..3d182c66 100644
--- a/content/en/docs/v3.4/benchmarks/etcd-3-demo-benchmarks.md
+++ b/content/en/docs/v3.4/benchmarks/etcd-3-demo-benchmarks.md
@@ -6,4 +6,4 @@ description: Performance measures for etcd v3
 
 See [etcd v3 performance benchmarking][etcd-v3-benchmark].
 
-[etcd-v3-benchmark]: ../op-guide/performance/#benchmarks
+[etcd-v3-benchmark]: ../../op-guide/performance/#benchmarks
diff --git a/content/en/docs/v3.4/branch_management.md b/content/en/docs/v3.4/branch_management.md
index ba396a3e..94ac2694 100644
--- a/content/en/docs/v3.4/branch_management.md
+++ b/content/en/docs/v3.4/branch_management.md
@@ -19,12 +19,12 @@ The `master` branch is our development branch. All new features land here first.
 
 To try new and experimental features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
 
-Before the release of the next stable version, feature PRs will be frozen. A [release manager](./dev-internal/release#release-management) will be assigned to major/minor version and will lead the etcd community in test, bug-fix and documentation of the release for one to two weeks.
+Before the release of the next stable version, feature PRs will be frozen. A [release manager](../dev-internal/release/#release-management) will be assigned to each major/minor version and will lead the etcd community in testing, bug-fixing, and documenting the release for one to two weeks.
 
 ### Stable branches
 
 All branches with prefix `release-` are considered _stable_ branches.
 
-After every minor release ([semver.org](https://semver.org/)), we will have a new stable branch for that release, managed by a [patch release manager](./dev-internal/release#release-management). We will keep fixing the backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be once every two weeks, given any patches.
+After every minor release ([semver.org](https://semver.org/)), we will have a new stable branch for that release, managed by a [patch release manager](../dev-internal/release/#release-management). We will keep fixing backwards-compatible bugs for the latest two stable releases. A _patch_ release to each supported release branch, incorporating any bug fixes, will be made once every two weeks, given any patches.
 
 [master]: https://github.com/etcd-io/etcd/tree/master
diff --git a/content/en/docs/v3.4/dev-guide/api_grpc_gateway.md b/content/en/docs/v3.4/dev-guide/api_grpc_gateway.md
index 96e9d236..24dd5e9a 100644
--- a/content/en/docs/v3.4/dev-guide/api_grpc_gateway.md
+++ b/content/en/docs/v3.4/dev-guide/api_grpc_gateway.md
@@ -129,7 +129,7 @@ curl -L http://localhost:2379/v3/kv/put \
 
 Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
 
-[api-ref]: ./api_reference_v3
+[api-ref]: ../api_reference_v3/
 [etcdctl]: https://github.com/etcd-io/etcd/tree/master/etcdctl
 [go-client]: https://github.com/etcd-io/etcd/tree/master/client/v3
 [grpc]: https://www.grpc.io/
diff --git a/content/en/docs/v3.4/dev-guide/local_cluster.md b/content/en/docs/v3.4/dev-guide/local_cluster.md
index f4540773..e6072392 100644
--- a/content/en/docs/v3.4/dev-guide/local_cluster.md
+++ b/content/en/docs/v3.4/dev-guide/local_cluster.md
@@ -149,5 +149,5 @@ To exercise etcd's fault tolerance, kill a member and attempt to retrieve the ke
 
 Restarting the member re-establish the connection. `etcdctl` will now be able to retrieve the key successfully. To learn more about interacting with etcd, read [interacting with etcd section][interacting].
 
-[clustering]: ../op-guide/clustering
-[interacting]: ./interacting_v3
+[clustering]: ../../op-guide/clustering/
+[interacting]: ../interacting_v3/
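For reference, the fault-tolerance exercise the local_cluster.md hunk describes looks roughly like this (a sketch assuming the guide's default three-member layout with client ports 2379, 22379, and 32379):

```
etcdctl --endpoints=localhost:2379 put foo bar   # write through any member
# stop one member (e.g. Ctrl-C its process), then read via the survivors
etcdctl --endpoints=localhost:22379,localhost:32379 get foo
# foo
# bar
```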
diff --git a/content/en/docs/v3.4/faq.md b/content/en/docs/v3.4/faq.md
index 2c544137..95ba0785 100644
--- a/content/en/docs/v3.4/faq.md
+++ b/content/en/docs/v3.4/faq.md
@@ -147,21 +147,21 @@ If none of the above suggestions clear the warnings, please [open an issue][new_
 
 etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.
 
-[api-mvcc]: learning/api#revisions
-[backend_commit_metrics]: ./metrics#disk
+[api-mvcc]: ../learning/api/#revisions
+[backend_commit_metrics]: ../metrics/#disk
 [backup]: /docs/v3.4/op-guide/recovery#snapshotting-the-keyspace
 [benchmark]: https://github.com/etcd-io/etcd/tree/master/tools/benchmark
 [benchmark-result]: /docs/v3.4/op-guide/performance/
 [chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
 [fio-blog-post]: https://www.ibm.com/blogs/bluemix/2019/04/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd/
 [fio]: https://github.com/axboe/fio
-[hardware-setup]: ./op-guide/hardware
-[maintenance-compact]: op-guide/maintenance#history-compaction
-[maintenance-defragment]: op-guide/maintenance#defragmentation
+[hardware-setup]: ../op-guide/hardware/
+[maintenance-compact]: ../op-guide/maintenance/#history-compaction
+[maintenance-defragment]: ../op-guide/maintenance/#defragmentation
 [maintenance-disarm]: https://github.com/etcd-io/etcd/tree/master/etcdctl#alarm-disarm
 [new_issue]: https://github.com/etcd-io/etcd/issues/new
 [raft]: https://raft.github.io/raft.pdf
 [runtime reconfiguration]: /docs/v3.4/op-guide/runtime-configuration/
-[supported-platform]: ./op-guide/supported-platform
-[tuning]: ./tuning
-[wal_fsync_duration_seconds]: ./metrics#disk
+[supported-platform]: ../op-guide/supported-platform/
+[tuning]: ../tuning/
+[wal_fsync_duration_seconds]: ../metrics/#disk
diff --git a/content/en/docs/v3.4/learning/api.md b/content/en/docs/v3.4/learning/api.md
index 86361595..70676258 100644
--- a/content/en/docs/v3.4/learning/api.md
+++ b/content/en/docs/v3.4/learning/api.md
@@ -477,7 +477,7 @@ message LeaseKeepAliveResponse {
 * TTL - the new time-to-live, in seconds, that the lease has remaining.
 
 [elections]: https://github.com/etcd-io/etcd/blob/master/client/v3/concurrency/election.go
-[grpc-api]: ../dev-guide/api_reference_v3
+[grpc-api]: ../../dev-guide/api_reference_v3/
 [grpc-service]: https://github.com/etcd-io/etcd/blob/master/api/etcdserverpb/rpc.proto
 [kv-proto]: https://github.com/etcd-io/etcd/blob/{{< param git_version_tag >}}/mvcc/mvccpb/kv.proto
 [locks]: https://github.com/etcd-io/etcd/blob/master/client/v3/concurrency/mutex.go
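The `LeaseKeepAliveResponse` fields in the learning/api.md hunk map directly onto etcdctl's lease commands; a quick illustration (the lease ID below is a made-up example value):

```
etcdctl lease grant 60
# lease 694d77aa9e38260f granted with TTL(60s)
etcdctl lease keep-alive 694d77aa9e38260f
# each keep-alive response carries the refreshed TTL described above
```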
diff --git a/content/en/docs/v3.4/learning/api_guarantees.md b/content/en/docs/v3.4/learning/api_guarantees.md
index 53166b91..54678ded 100644
--- a/content/en/docs/v3.4/learning/api_guarantees.md
+++ b/content/en/docs/v3.4/learning/api_guarantees.md
@@ -53,4 +53,4 @@ etcd ensures linearizability for all other operations by default. Linearizabilit
 
 [linearizability]: https://cs.brown.edu/~mph/HerlihyW90/p463-herlihy.pdf
 [strict_serializability]: http://jepsen.io/consistency/models/strict-serializable
-[txn]: api#transaction
+[txn]: ../api/#transaction
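The api_guarantees.md hunk above distinguishes linearizable from serializable reads; on the client side the switch is a single flag (a minimal sketch, key name illustrative):

```
etcdctl get foo                     # linearizable read (the default)
etcdctl get foo --consistency="s"   # serializable: lower latency, may return stale data
```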
diff --git a/content/en/docs/v3.4/learning/why.md b/content/en/docs/v3.4/learning/why.md
index 87e02247..9c41d05a 100644
--- a/content/en/docs/v3.4/learning/why.md
+++ b/content/en/docs/v3.4/learning/why.md
@@ -92,19 +92,19 @@ For distributed coordination, choosing etcd can help prevent operational headach
 [container-linux]: https://coreos.com/why
 [curator]: http://curator.apache.org/
 [dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul
-[etcd-commonname]: ../op-guide/authentication#using-tls-common-name
+[etcd-commonname]: ../../op-guide/authentication/#using-tls-common-name
 [etcd-etcdctl-elect]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#elect-options-election-name-proposal
 [etcd-etcdctl-lock]: https://github.com/etcd-io/etcd/tree/master/etcdctl/README.md#lock-lockname-command-arg1-arg2-
-[etcd-json]: ../dev-guide/api_grpc_gateway
-[etcd-linread]: api_guarantees#isolation-level-and-consistency-of-replicas
-[etcd-mvcc]: data_model
-[etcd-rbac]: ../op-guide/authentication#working-with-roles
+[etcd-json]: ../../dev-guide/api_grpc_gateway/
+[etcd-linread]: ../api_guarantees/#isolation-level-and-consistency-of-replicas
+[etcd-mvcc]: ../data_model/
+[etcd-rbac]: ../../op-guide/authentication/#working-with-roles
 [etcd-recipe]: https://godoc.org/github.com/etcd-io/etcd/contrib/recipes
-[etcd-reconfig]: ../op-guide/runtime-configuration
-[etcd-txn]: api#transaction
+[etcd-reconfig]: ../../op-guide/runtime-configuration/
+[etcd-txn]: ../api/#transaction
 [etcd-v3election]: https://godoc.org/github.com/coreos/etcd-io/etcdserver/api/v3election/v3electionpb
 [etcd-v3lock]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3lock/v3lockpb
-[etcd-watch]: api#watch-streams
+[etcd-watch]: ../api/#watch-streams
 [grpc]: https://www.grpc.io
 [kubernetes]: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
 [locksmith]: https://github.com/coreos/locksmith
diff --git a/content/en/docs/v3.4/op-guide/clustering.md b/content/en/docs/v3.4/op-guide/clustering.md
index 93110096..572e8463 100644
--- a/content/en/docs/v3.4/op-guide/clustering.md
+++ b/content/en/docs/v3.4/op-guide/clustering.md
@@ -487,14 +487,14 @@ When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mod
 To setup an etcd cluster with proxies of v2 API, please read the the [clustering doc in etcd 2.3 release][clustering_etcd2].
 
 [clustering_etcd2]: https://github.com/etcd-io/etcd/blob/release-2.3/Documentation/clustering.md
-[conf-adv-client]: configuration#--advertise-client-urls
-[conf-listen-client]: configuration#--listen-client-urls
-[discovery-proto]: ../dev-internal/discovery_protocol
-[gateway]: gateway
+[conf-adv-client]: ../configuration/#--advertise-client-urls
+[conf-listen-client]: ../configuration/#--listen-client-urls
+[discovery-proto]: ../../dev-internal/discovery_protocol/
+[gateway]: ../gateway/
 [proxy]: https://github.com/etcd-io/etcd/blob/release-2.3/Documentation/proxy.md
 [rfc-srv]: http://www.ietf.org/rfc/rfc2052.txt
-[runtime-conf]: runtime-configuration
-[runtime-reconf-design]: runtime-reconf-design
-[security-guide-dns-srv]: security#notes-for-dns-srv
-[security-guide]: security
+[runtime-conf]: ../runtime-configuration/
+[runtime-reconf-design]: ../runtime-reconf-design/
+[security-guide-dns-srv]: ../security/#notes-for-dns-srv
+[security-guide]: ../security/
 [tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup
diff --git a/content/en/docs/v3.4/op-guide/configuration.md b/content/en/docs/v3.4/op-guide/configuration.md
index aa7c9160..76877dc5 100644
--- a/content/en/docs/v3.4/op-guide/configuration.md
+++ b/content/en/docs/v3.4/op-guide/configuration.md
@@ -460,15 +460,15 @@ a private certificate authority using `--peer-cert-file`, `--peer-key-file`, `--
 + default: false
 + env variable: ETCD_EXPERIMENTAL_PEER_SKIP_CLIENT_SAN_VERIFICATION
 
-[build-cluster]: clustering#static
-[discovery]: clustering#discovery
+[build-cluster]: ../clustering/#static
+[discovery]: ../clustering/#discovery
 [iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
-[proxy]: /docs/v2.3/proxy
-[reconfig]: runtime-configuration
-[recovery]: ../recovery
-[restore]: /docs/v2.3/admin_guide#restoring-a-backup
+[proxy]: /docs/v2.3/proxy/
+[reconfig]: ../runtime-configuration/
+[recovery]: ../recovery/
+[restore]: /docs/v2.3/admin_guide/#restoring-a-backup
 [sample-config-file]: https://github.com/etcd-io/etcd/blob/release-3.4/etcd.conf.yml.sample
-[security]: ../security
-[static bootstrap]: clustering#static
+[security]: ../security/
+[static bootstrap]: ../clustering/#static
 [systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
-[tuning]: ../tuning#time-parameters
+[tuning]: ../../tuning/#time-parameters
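The configuration.md hunk touches the peer TLS options; for orientation, a hedged sketch of one member started with peer-certificate authentication (the name, paths, and addresses are placeholders):

```
etcd --name infra0 \
  --initial-advertise-peer-urls https://10.0.1.10:2380 \
  --listen-peer-urls https://10.0.1.10:2380 \
  --peer-cert-file /etc/etcd/peer.crt \
  --peer-key-file /etc/etcd/peer.key \
  --peer-trusted-ca-file /etc/etcd/peer-ca.crt \
  --peer-client-cert-auth
```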
diff --git a/content/en/docs/v3.4/op-guide/container.md b/content/en/docs/v3.4/op-guide/container.md
index ed7b3225..b86263f9 100644
--- a/content/en/docs/v3.4/op-guide/container.md
+++ b/content/en/docs/v3.4/op-guide/container.md
@@ -4,7 +4,7 @@ weight: 4200
 description: Running etcd with rkt and Docker using static bootstrapping
 ---
 
-The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering#static).
+The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](../clustering/#static).
 
 ## rkt
 
diff --git a/content/en/docs/v3.4/op-guide/failures.md b/content/en/docs/v3.4/op-guide/failures.md
index 577203c6..0c0fe873 100644
--- a/content/en/docs/v3.4/op-guide/failures.md
+++ b/content/en/docs/v3.4/op-guide/failures.md
@@ -44,5 +44,5 @@ A cluster bootstrap is only successful if all required members successfully star
 
 Of course, it is possible to recover a failed bootstrapped cluster like recovering a running cluster. However, it almost always takes more time and resources to recover that cluster than bootstrapping a new one, since there is no data to recover.
 
-[backup]: maintenance#snapshot-backup
-[unrecoverable]: recovery
+[backup]: ../maintenance/#snapshot-backup
+[unrecoverable]: ../recovery/
diff --git a/content/en/docs/v3.4/op-guide/grpc_proxy.md b/content/en/docs/v3.4/op-guide/grpc_proxy.md
index ce743098..0a7b3a0b 100644
--- a/content/en/docs/v3.4/op-guide/grpc_proxy.md
+++ b/content/en/docs/v3.4/op-guide/grpc_proxy.md
@@ -103,7 +103,7 @@ bar
 
 ## Client endpoint synchronization and name resolution
 
-The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../dev-guide/grpc_naming.md).
+The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../../dev-guide/grpc_naming/).
 
 Register proxy(s) by providing a user-defined prefix:
 
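The grpc_proxy.md hunk describes registering proxy endpoints under a user-defined prefix; the invocation looks roughly like this (addresses and prefix are illustrative, flags per the v3.4 gRPC proxy guide):

```
etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --advertise-client-url=127.0.0.1:23790 \
  --resolver-prefix="___grpc_proxy_endpoint" \
  --resolver-ttl=60
```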
diff --git a/content/en/docs/v3.4/op-guide/hardware.md b/content/en/docs/v3.4/op-guide/hardware.md
index 7c3ee182..c4e40e4b 100644
--- a/content/en/docs/v3.4/op-guide/hardware.md
+++ b/content/en/docs/v3.4/op-guide/hardware.md
@@ -94,4 +94,4 @@ Example application workload: A 3,000 node Kubernetes cluster
 [diskbench]: https://github.com/ongardie/diskbenchmark
 [fio]: https://github.com/axboe/fio
 [fio-blog-post]: https://www.ibm.com/cloud/blog/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd
-[tuning]: ../tuning
+[tuning]: ../../tuning/
diff --git a/content/en/docs/v3.4/op-guide/runtime-configuration.md b/content/en/docs/v3.4/op-guide/runtime-configuration.md
index 77cd5c61..88222f13 100644
--- a/content/en/docs/v3.4/op-guide/runtime-configuration.md
+++ b/content/en/docs/v3.4/op-guide/runtime-configuration.md
@@ -234,15 +234,15 @@ It is enabled by default.
 
 [add member]: #add-a-new-member
 [cluster-reconf]: #cluster-reconfiguration-operations
-[conf-adv-peer]: configuration#--initial-advertise-peer-urls
-[conf-name]: configuration#--name
-[design-learner]: ../learning/design-learner
-[disaster recovery]: recovery
+[conf-adv-peer]: ../configuration/#--initial-advertise-peer-urls
+[conf-name]: ../configuration/#--name
+[design-learner]: ../../learning/design-learner/
+[disaster recovery]: ../recovery/
 [error cases when promoting a member]: #error-cases-when-promoting-a-learner-member
 [fault tolerance table]: /docs/v2.3/admin_guide#fault-tolerance-table
 [majority failure]: #restart-cluster-from-majority-failure
-[member migration]: /docs/v2.3/admin_guide#member-migration
+[member migration]: /docs/v2.3/admin_guide/#member-migration
 [member-api]: /docs/v2.3/members_api
-[member-api-grpc]: ../dev-guide/api_reference_v3#service-cluster-etcdserveretcdserverpbrpcproto
+[member-api-grpc]: ../../dev-guide/api_reference_v3/#service-cluster-etcdserveretcdserverpbrpcproto
 [remove member]: #remove-a-member
-[runtime-reconf]: runtime-reconf-design
+[runtime-reconf]: ../runtime-reconf-design/
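The runtime-configuration.md hunk links the learner-promotion material; the underlying workflow, sketched with etcdctl (the peer URL and member ID are illustrative):

```
# add the new member as a non-voting learner, start it, then promote once it catches up
etcdctl member add infra4 --peer-urls=http://10.0.1.14:2380 --learner
etcdctl member promote 9bf1b35fc7761a23
```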
diff --git a/content/en/docs/v3.4/op-guide/runtime-reconf-design.md b/content/en/docs/v3.4/op-guide/runtime-reconf-design.md
index dd89a97d..7dd4d8e7 100644
--- a/content/en/docs/v3.4/op-guide/runtime-reconf-design.md
+++ b/content/en/docs/v3.4/op-guide/runtime-reconf-design.md
@@ -50,5 +50,5 @@ It seems that using public discovery service is a convenient way to do runtime r
 
 To have a discovery service that supports runtime reconfiguration, the best choice is to build a private one.
 
-[add-member]: runtime-configuration#add-a-new-member
-[disaster-recovery]: recovery
+[add-member]: ../runtime-configuration/#add-a-new-member
+[disaster-recovery]: ../recovery/
diff --git a/content/en/docs/v3.4/op-guide/security.md b/content/en/docs/v3.4/op-guide/security.md
index f913b8a8..2b251006 100644
--- a/content/en/docs/v3.4/op-guide/security.md
+++ b/content/en/docs/v3.4/op-guide/security.md
@@ -429,7 +429,7 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad
 The certificate needs to be signed for the member's FQDN in its Subject Name, use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too.
 
 [alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
-[auth]: authentication
+[auth]: ../authentication/
 [cfssl]: https://github.com/cloudflare/cfssl
 [tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
 [tls-setup]: https://github.com/etcd-io/etcd/tree/master/hack/tls-setup
diff --git a/content/en/docs/v3.4/platforms/aws.md b/content/en/docs/v3.4/platforms/aws.md
index 578896a5..ea36bb08 100644
--- a/content/en/docs/v3.4/platforms/aws.md
+++ b/content/en/docs/v3.4/platforms/aws.md
@@ -10,12 +10,12 @@ This guide assumes operational knowledge of Amazon Web Services (AWS), specifica
 As a critical building block for distributed systems it is crucial to perform adequate capacity planning in order to support the intended cluster workload. As a highly available and strongly consistent data store increasing the number of nodes in an etcd cluster will generally affect performance adversely. This makes sense intuitively, as more nodes means more members for the leader to coordinate state across.
 
 The most direct way to increase throughput and decrease latency of an etcd cluster is allocate more disk I/O, network I/O, CPU, and memory to cluster members. In the event it is impossible to temporarily divert incoming requests to the cluster, scaling the EC2 instances which comprise the etcd cluster members one at a time may improve performance. It is, however, best to avoid bottlenecks through capacity planning.
 
-The etcd team has produced a [hardware recommendation guide](../op-guide/hardware.md) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
+The etcd team has produced a [hardware recommendation guide](../../op-guide/hardware/) which is very useful for “ballparking” how many nodes and what instance type are necessary for a cluster.
 
 AWS provides a service for creating groups of EC2 instances which are dynamically sized to match load on the instances. Using an Auto Scaling Group ([ASG](http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html)) to dynamically scale an etcd cluster is not recommended for several reasons including:
 
 * etcd performance is generally inversely proportional to the number of members in a cluster due to the synchronous replication which provides strong consistency of data stored in etcd
-* the operational complexity of adding [lifecycle hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to properly add and remove members from an etcd cluster by modifying the [runtime configuration](../op-guide/runtime-configuration.md)
+* the operational complexity of adding [lifecycle hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to properly add and remove members from an etcd cluster by modifying the [runtime configuration](../../op-guide/runtime-configuration/)
 
 Auto Scaling Groups do provide a number of benefits besides cluster scaling which include:
@@ -60,7 +60,7 @@ A highly available etcd cluster is resilient to member loss, however, it is impo
 
 ### Performance/Throughput
 
-The performance of an etcd cluster is roughly quantifiable through latency and throughput metrics which are primarily affected by disk and network performance. Detailed performance planning information is provided in the [performance section](../op-guide/performance.md) of the etcd operations guide.
+The performance of an etcd cluster is roughly quantifiable through latency and throughput metrics which are primarily affected by disk and network performance. Detailed performance planning information is provided in the [performance section](../../op-guide/performance/) of the etcd operations guide.
 
 #### Network
 
diff --git a/content/en/docs/v3.4/upgrades/upgrade_3_1.md b/content/en/docs/v3.4/upgrades/upgrade_3_1.md
index 71a7f2e8..fe8d7b80 100644
--- a/content/en/docs/v3.4/upgrades/upgrade_3_1.md
+++ b/content/en/docs/v3.4/upgrades/upgrade_3_1.md
@@ -33,7 +33,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C
 
 Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
 
-Before beginning, [backup the etcd data](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [backup the etcd data](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide/#backing-up-the-datastore).
 
 #### Mixed versions
@@ -51,7 +51,7 @@ For a much larger total data size, 100MB or more , this one-time process might t
 
 If all members have been upgraded to v3.1, the cluster will be upgraded to v3.1, and downgrade from this completed state is **not possible**. If any single member is still v3.0, however, the cluster and its operations remains "v3.0", and it is possible from this mixed cluster state to return to using a v3.0 etcd binary on all members.
 
-Please [backup the data directory](../op-guide/maintenance#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
+Please [backup the data directory](../../op-guide/maintenance/#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
 
 ### Upgrade procedure
@@ -84,7 +84,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
 2017-01-17 09:34:34.364907 W | etcdserver: failed to reach the peerURL(http://localhost:2380) of member fd32987dcd0511e0 (Get http://localhost:2380/version: dial tcp 127.0.0.1:2380: getsockopt: connection refused)
 ```
 
-It's a good idea at this point to [backup the etcd data](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur:
+It's a good idea at this point to [backup the etcd data](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur:
 
 ```
 $ etcdctl snapshot save backup.db
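Every upgrade guide in this patch leans on the same snapshot backup step; a useful companion check before proceeding (the file name follows the guides' own example):

```
etcdctl snapshot save backup.db
etcdctl snapshot status backup.db --write-out=table   # hash, revision, total keys, size
```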
diff --git a/content/en/docs/v3.4/upgrades/upgrade_3_2.md b/content/en/docs/v3.4/upgrades/upgrade_3_2.md
index 8bb3bb40..5008861a 100644
--- a/content/en/docs/v3.4/upgrades/upgrade_3_2.md
+++ b/content/en/docs/v3.4/upgrades/upgrade_3_2.md
@@ -230,7 +230,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C
 
 Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
 
-Before beginning, [backup the etcd data](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [backup the etcd data](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide/#backing-up-the-datastore).
 
 #### Mixed versions
@@ -248,7 +248,7 @@ For a much larger total data size, 100MB or more , this one-time process might t
 
 If all members have been upgraded to v3.2, the cluster will be upgraded to v3.2, and downgrade from this completed state is **not possible**. If any single member is still v3.1, however, the cluster and its operations remains "v3.1", and it is possible from this mixed cluster state to return to using a v3.1 etcd binary on all members.
 
-Please [backup the data directory](../op-guide/maintenance#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
+Please [backup the data directory](../../op-guide/maintenance/#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
 
 ### Upgrade procedure
@@ -290,7 +290,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
 2017-04-27 14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
 ```
 
-It's a good idea at this point to [backup the etcd data](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur:
+It's a good idea at this point to [backup the etcd data](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur:
 
 ```
 $ etcdctl snapshot save backup.db
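One caveat for the 3.1/3.2-era tooling in these hunks: etcdctl builds older than 3.4 default to the v2 API, so the v3 `snapshot` command must be selected explicitly (worth verifying against the client actually in use):

```
ETCDCTL_API=3 etcdctl snapshot save backup.db
```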
diff --git a/content/en/docs/v3.4/upgrades/upgrade_3_3.md b/content/en/docs/v3.4/upgrades/upgrade_3_3.md
index e968876b..d05e826e 100644
--- a/content/en/docs/v3.4/upgrades/upgrade_3_3.md
+++ b/content/en/docs/v3.4/upgrades/upgrade_3_3.md
@@ -61,7 +61,7 @@ func (e *EtcdServer) Start() error {
 
 #### Added `embed.Config.LogOutput` struct
 
-**Note that this field has been renamed to `embed.Config.LogOutputs` in `[]string` type in v3.4. Please see [v3.4 upgrade guide](/docs/v3.4/upgrades/upgrade_3_4/) for more details.**
+**Note that this field has been renamed to `embed.Config.LogOutputs` in `[]string` type in v3.4. Please see [v3.4 upgrade guide](../upgrade_3_4/) for more details.**
 
 Field `LogOutput` is added to `embed.Config`:
 
@@ -421,7 +421,7 @@ Please see [CHANGELOG](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3
 
 #### Upgrade requirements
 
-To upgrade an existing etcd deployment to 3.3, the running cluster must be 3.2 or greater. If it's before 3.2, please [upgrade to 3.2](upgrade_3_2.md) before upgrading to 3.3.
+To upgrade an existing etcd deployment to 3.3, the running cluster must be 3.2 or greater. If it's before 3.2, please [upgrade to 3.2](../upgrade_3_2/) before upgrading to 3.3.
 
 Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
 
@@ -429,7 +429,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C
 
 Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
 
-Before beginning, [backup the etcd data](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [backup the etcd data](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide/#backing-up-the-datastore).
 
 #### Mixed versions
@@ -447,7 +447,7 @@ For a much larger total data size, 100MB or more , this one-time process might t
 
 If all members have been upgraded to v3.3, the cluster will be upgraded to v3.3, and downgrade from this completed state is **not possible**. If any single member is still v3.2, however, the cluster and its operations remains "v3.2", and it is possible from this mixed cluster state to return to using a v3.2 etcd binary on all members.
 
-Please [backup the data directory](../op-guide/maintenance#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
+Please [backup the data directory](../../op-guide/maintenance/#snapshot-backup) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
 
 ### Upgrade procedure
@@ -489,7 +489,7 @@ When each etcd process is stopped, expected errors will be logged by other clust
 14:13:31.936678 W | rafthttp: lost the TCP streaming connection with peer 6d4f535bae3ab960 (stream Message writer)
 ```
 
-It's a good idea at this point to [backup the etcd data](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur:
+It's a good idea at this point to [backup the etcd data](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur:
 
 ```
 $ etcdctl snapshot save backup.db
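During a rolling upgrade like the ones above, the binary version each member reports can be confirmed from any client (endpoints are placeholders):

```
etcdctl --endpoints=10.0.1.10:2379,10.0.1.11:2379,10.0.1.12:2379 \
  endpoint status --write-out=table   # includes a VERSION column per member
```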
diff --git a/content/en/docs/v3.4/upgrades/upgrade_3_4.md b/content/en/docs/v3.4/upgrades/upgrade_3_4.md
index 070a8474..74d1c92f 100644
--- a/content/en/docs/v3.4/upgrades/upgrade_3_4.md
+++ b/content/en/docs/v3.4/upgrades/upgrade_3_4.md
@@ -365,7 +365,7 @@ Requests to `/v3beta` endpoints will redirect to `/v3`, and `/v3beta` will be re
 
 #### Upgrade requirements
 
-To upgrade an existing etcd deployment to 3.4, the running cluster must be 3.3 or greater. If it's before 3.3, please [upgrade to 3.3](upgrade_3_3) before upgrading to 3.4.
+To upgrade an existing etcd deployment to 3.4, the running cluster must be 3.3 or greater. If it's before 3.3, please [upgrade to 3.3](../upgrade_3_3/) before upgrading to 3.4.
 
 Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
 
@@ -373,7 +373,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C
 
 Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
 
-Before beginning, [download the snapshot backup](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [download the snapshot backup](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide/#backing-up-the-datastore).
 
 #### Mixed versions
@@ -391,7 +391,7 @@ For a much larger total data size, 100MB or more , this one-time process might t
 
 If all members have been upgraded to v3.4, the cluster will be upgraded to v3.4, and downgrade from this completed state is **not possible**. If any single member is still v3.3, however, the cluster and its operations remains "v3.3", and it is possible from this mixed cluster state to return to using a v3.3 etcd binary on all members.
 
-Please [download the snapshot backup](../op-guide/maintenance#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
+Please [download the snapshot backup](../../op-guide/maintenance/#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
 
 ### Upgrade procedure
@@ -427,7 +427,7 @@ COMMENT
 
 #### Step 2: download snapshot backup from leader
 
-[Download the snapshot backup](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur.
+[Download the snapshot backup](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur.
 
 etcd leader is guaranteed to have the latest application data, thus fetch snapshot from leader:
 
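And the health gate that the upgrade-requirements sections call for, in its minimal form (endpoints are placeholders, output illustrative):

```
etcdctl --endpoints=10.0.1.10:2379,10.0.1.11:2379,10.0.1.12:2379 endpoint health
# 10.0.1.10:2379 is healthy: successfully committed proposal: took = 1.045291ms
```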
diff --git a/content/en/docs/v3.4/upgrades/upgrade_3_5.md b/content/en/docs/v3.4/upgrades/upgrade_3_5.md
index f698b27c..9316a9ad 100644
--- a/content/en/docs/v3.4/upgrades/upgrade_3_5.md
+++ b/content/en/docs/v3.4/upgrades/upgrade_3_5.md
@@ -153,7 +153,7 @@ curl -L http://localhost:2379/v3/kv/put \
 
 #### Upgrade requirements
 
-To upgrade an existing etcd deployment to 3.5, the running cluster must be 3.4 or greater. If it's before 3.4, please [upgrade to 3.4](upgrade_3_3.md) before upgrading to 3.5.
+To upgrade an existing etcd deployment to 3.5, the running cluster must be 3.4 or greater. If it's before 3.4, please [upgrade to 3.4](../upgrade_3_4/) before upgrading to 3.5.
 
 Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
 
@@ -161,7 +161,7 @@ Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. C
 
 Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
 
-Before beginning, [download the snapshot backup](../op-guide/maintenance#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide#backing-up-the-datastore).
+Before beginning, [download the snapshot backup](../../op-guide/maintenance/#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](/docs/v2.3/admin_guide/#backing-up-the-datastore).
 
 #### Mixed versions
@@ -179,7 +179,7 @@ For a much larger total data size, 100MB or more , this one-time process might t
 
 If all members have been upgraded to v3.5, the cluster will be upgraded to v3.5, and downgrade from this completed state is **not possible**. If any single member is still v3.4, however, the cluster and its operations remains "v3.4", and it is possible from this mixed cluster state to return to using a v3.4 etcd binary on all members.
 
-Please [download the snapshot backup](../op-guide/maintenance#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
+Please [download the snapshot backup](../../op-guide/maintenance/#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
 
 ### Upgrade procedure
@@ -215,7 +215,7 @@ COMMENT
 
 #### Step 2: download snapshot backup from leader
 
-[Download the snapshot backup](../op-guide/maintenance#snapshot-backup) to provide a downgrade path should any problems occur.
+[Download the snapshot backup](../../op-guide/maintenance/#snapshot-backup) to provide a downgrade path should any problems occur.
 
 etcd leader is guaranteed to have the latest application data, thus fetch snapshot from leader:
 
diff --git a/content/en/docs/v3.4/upgrades/upgrading-etcd.md b/content/en/docs/v3.4/upgrades/upgrading-etcd.md
index 0d712ba9..b9df045b 100644
--- a/content/en/docs/v3.4/upgrades/upgrading-etcd.md
+++ b/content/en/docs/v3.4/upgrades/upgrading-etcd.md
@@ -11,13 +11,13 @@ This section contains documents specific to upgrading etcd clusters and applicat
 
 * [Migrate applications from using API v2 to API v3][migrate-apps]
 
 ## Upgrading an etcd v3.x cluster
-* [Upgrade etcd from 3.0 to 3.1](upgrade_3_1)
-* [Upgrade etcd from 3.1 to 3.2](upgrade_3_2)
-* [Upgrade etcd from 3.2 to 3.3](upgrade_3_3)
-* [Upgrade etcd from 3.3 to 3.4](upgrade_3_4)
+* [Upgrade etcd from 3.0 to 3.1](../upgrade_3_1/)
+* [Upgrade etcd from 3.1 to 3.2](../upgrade_3_2/)
+* [Upgrade etcd from 3.2 to 3.3](../upgrade_3_3/)
+* [Upgrade etcd from 3.3 to 3.4](../upgrade_3_4/)
 
 ## Upgrading from etcd v2.3
-* [Upgrade a v2.3 cluster to v3.0](upgrade_3_0)
+* [Upgrade a v2.3 cluster to v3.0](../upgrade_3_0/)
 
-[migrate-apps]: ../op-guide/v2-migration
+[migrate-apps]: ../../op-guide/v2-migration/