This repository has been archived by the owner on Feb 12, 2021. It is now read-only.

etcd: configuring etcd-member by hand. #1074

Open · wants to merge 2 commits into master

Conversation

Contributor

@pop pop commented May 15, 2017

This adds a 'manually running etcd v3' guide using the etcd-member service. Hopefully we can link to this when it's not appropriate or possible to use Container Linux configs or ignition configs.

This may be lacking some information; please let me know if the scope should be increased.

Refs coreos/bugs#1877, coreos/bugs#1838, coreos/bugs#1963, and probably a few more issues.

@pop pop self-assigned this May 15, 2017
@pop pop requested review from joshix and radhikapc May 15, 2017 23:57

The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!

**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
Contributor

set up (verb), setup (noun)

Contributor

Before we begin: If you are .. Use this guide only if you must set up etcd the hard way.


*If you want a cluster of more than 2 nodes, make sure `size=#` where # is the number of nodes you want. Otherwise the extra ndoes will become proxies.*

1. Edit the file appropriately and save it. Run `systemctl daemon-reload`.
Contributor

  1. Edit the file appropriately and save it.
  2. Run systemctl daemon-reload.

Contributor

To create a cluster of more than 2 nodes, set size=#, where # is the number of nodes you wish to create. If not set, extra nodes will become proxies.

(note ndoes > nodes, and this paragraph doesn't have to be in italics.)
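For reference, a discovery URL like the one used later in this guide can be requested with curl; this is a minimal sketch, with size=3 standing in for whatever node count you actually want:

```sh
# Request a new discovery URL for a 3-node cluster (adjust size as needed).
curl -w "\n" "https://discovery.etcd.io/new?size=3"
```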

Contributor

@zbwright zbwright May 16, 2017

And I don't think you should use numbers with so few steps; words suffice. It looks to me like this should be (correct me if I'm misunderstanding, please):

Save your changes, then run systemctl daemon-reload.

Repeat for your second node. After editing, it should look like this:

Contributor

@radhikapc radhikapc May 16, 2017

The rule I would follow is to use a numbered procedure for anything with more than 2 steps. When I tested, there were multiple steps involved.

Contributor Author

If a procedure has more than 1 step I like to make it a list. Lists are easier to read than paragraphs.
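However the steps end up being numbered, the per-node workflow under discussion boils down to two commands. A minimal sketch, assuming the etcd-member unit that ships with Container Linux:

```sh
# Open (or create) a drop-in override for the etcd-member unit in the default editor.
sudo systemctl edit etcd-member

# Reload systemd so the saved drop-in is picked up.
sudo systemctl daemon-reload
```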

*If at any point you get confused about this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity-checks.*

## Verification

Contributor

Shall we split this into steps?

  1. Verify that the services.....
  2. On both nodes, run..
  3. If this command hangs...
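A sketch of those verification steps, using the commands that appear later in this guide:

```sh
# Print the unit file together with its drop-in overrides; your changes should appear here.
systemctl cat etcd-member

# Enable the service at boot and start it now (run on both nodes).
sudo systemctl enable etcd-member && sudo systemctl start etcd-member
```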

Contributor

Note that the arguments used in this configuration file are the same as those passed to the etcd binary when starting a cluster. For more information on help and sanity checks, see the [etcd clustering guide].

true
```

There you have it! You have now setup etcd v3 by hand. Pat yourself on the back. Take five.
Contributor

set up (verb) or configured


Since etcd is running in a container, the second option is very easy.

Start by stopping the `etcd-member` service (run these commands *on* the Container Linux nodes).
Contributor

@radhikapc radhikapc May 16, 2017

Run the following commands on the Container Linux nodes:

  1. Stop the etcd-member...
  2. Delete the etcd data
  3. Edit the etcd-member service.
  4. Restart the etcd-member service
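A consolidated sketch of those four steps, assuming the default data directory under /var/lib/etcd:

```sh
# Run on each Container Linux node.
sudo systemctl stop etcd-member      # 1. stop the service
sudo rm -rf /var/lib/etcd            # 2. delete the etcd data
sudo systemctl edit etcd-member      # 3. edit the drop-in override
sudo systemctl restart etcd-member   # 4. restart the service
```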

Docs: https://github.com/coreos/etcd
```

Next, delete the etcd data (again, run on the Container Linux nodes):
Contributor

  1. Delete the etcd data...


*If you set the etcd-member to have a custom data directory, you will need to run a different `rm` command.*
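For example, if the data directory was overridden in the drop-in, the path in the rm command changes accordingly; the path below is a hypothetical placeholder, not one from this guide:

```sh
# Hypothetical custom data directory; substitute whatever your override points at.
sudo rm -rf /var/lib/etcd-custom
```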

Edit the etcd-member service, restart the `etcd-member` service, and basically start this guide again from the top.
Contributor

  1. Edit the etcd-member service.
  2. Restart the etcd-member service.


The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!

**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
Contributor

@lucab lucab May 16, 2017

The target doc page does not mention ignition, and I think we try not to point casual end-users to it. I'd only say to use Container Linux Config snippets.


## Troubleshooting

In the process of setting up your etcd cluster you got it into a non-working state, you have a few options:
Contributor

Is this missing a starting "If"?

Contributor

@radhikapc radhikapc May 16, 2017

If you encounter an issue while setting up your etcd cluster, do either of the following:

  • Follow the instructions given at the [runtime configuration guide][runtime-guide]
  • Reset your environment (but how - any useful links?)

On your local machine, you should be able to run etcdctl commands which talk to this etcd cluster.

```sh
$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" cluster-health
Contributor

I seem to remember the etcdctl we ship is an older v2 version, am I wrong? Does it need an explicit env flag to activate v3? Should this go via rkt enter instead?

Contributor Author

@lucab The purpose of this is just to verify that etcd works. I was assuming you could run this command on your personal OS and not on the nodes. Should we clarify that? I don't want to bring rkt into the mix on this.

Contributor

Oh sorry, I missed the "On your local machine" part.
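To make the v2/v3 question above concrete: the command in the guide uses the v2 cluster-health subcommand, while an etcdctl that speaks the v3 API exposes a roughly equivalent check via endpoint health. A sketch, run from the local machine against the example IPs used in this guide:

```sh
# v2-style health check, as written in the guide:
etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" cluster-health

# v3-style equivalent, if the local etcdctl supports the v3 API:
ETCDCTL_API=3 etcdctl \
  --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" \
  endpoint health
```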

@@ -0,0 +1,127 @@
# Setting up etcd v3 on Container Linux "by hand"
Contributor

@radhikapc radhikapc May 16, 2017

Configuring etcd v3 Manually on Container Linux
Setting up etcd v3 on Container Linux Manually ?
Manual Configuration of etcd3 on Container Linux ?

Contributor

The third one is the best one.

| 0 | 192.168.100.100 | my-etcd-0 |
| 1 | 192.168.100.101 | my-etcd-1 |

First, run `sudo systemctl edit etcd-member` and paste the following code into the editor:
Contributor

I tried this, but I was confused when the editor opened a blank file. We should mention that somewhere:

  1. Enter the node. For example: vagrant ssh core-01 -- -A
  2. Run sudo systemctl edit etcd-member.
    Paste the following code into the empty file that opens (or something like that).
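Putting that suggestion into a sketch (the machine name core-01 comes from the comment above and assumes a Vagrant-based setup):

```sh
# Enter the first node...
vagrant ssh core-01 -- -A

# ...then, inside the node, open the (initially empty) drop-in in an editor.
sudo systemctl edit etcd-member
```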

Environment="ETCD_IMAGE_TAG=v3.1.7"
Environment="ETCD_OPTS=\
--name=\"my-etcd-1\" \
--listen-client-urls=\"http://192.168.100.101:2379\" \
Contributor

Should we state what 2379 and 2380 stand for? Are these replaceable?

Contributor Author

Later we mention that the IPs are what should be replaced and make no mention of the ports.

Honestly you should be familiar with what etcd does, what ports it uses, etc. You should be an intermediate etcd admin to use this guide imo.
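For readers who do want the detail: 2379 is the conventional etcd client port and 2380 the peer port. Assuming ss is available on the node, the listeners can be confirmed with something like:

```sh
# Confirm etcd is listening on the client (2379) and peer (2380) ports.
ss -tln | grep -E ':(2379|2380)'
```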

| `my-etcd-1` | The other node's name. |
| `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |

*If you want a cluster of more than 2 nodes, make sure `size=#` where # is the number of nodes you want. Otherwise the extra ndoes will become proxies.*
Contributor

Where do we edit size=#? Do mention the file name.

Contributor

Do we need to italicize the paragraph? Instead, we could use note notation: > paragraph...


You can verify that the services have been configured by running `systemctl cat etcd-member`. This will print the service and it's override conf to the screen. You should see your changes on both nodes.

On both nodes run `systemctl enable etcd-member && systemctl start etcd-member`.
Contributor

sudo systemctl enable etcd-member && sudo systemctl start etcd-member

Contributor

PS: Unfortunately, this command hung on my test machine.

Contributor

Verify that the services have been configured by ... This will print the service and its ...

--initial-cluster-state=\"new\""
```

*If at any point you get confused about this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity-checks.*
Contributor

previous comment about adding a note applicable here as well.

> type your note here....


In the process of setting up your etcd cluster you got it into a non-working state, you have a few options:

1. Reference the [runtime configuration guide][runtime-guide].
Contributor

Is this a procedure or an option? If an option, please use an itemized list instead of an ordered list.

@@ -30,6 +30,8 @@ etcd:
initial_cluster_state: new
```

If you are unable to provision your machine using Container Linux configs, check out the [Setting up etcd v3 on Container Linux "by hand"][by-hand]
Contributor

If you are unable to provision your machine using Container Linux configs, refer to....

@@ -0,0 +1,127 @@
# Setting up etcd v3 on Container Linux "by hand"

The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!
Contributor

No exclamation points is the guideline.

@@ -1,10 +1,12 @@
# Setting up etcd v3 on Container Linux "by hand"
# Manual Configuration of etcd3 on Container Linux
Contributor

Manual configuration - sentence case. :)


**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
f **Before we begin** if you are able to use Container Linux Configs [to provision your Container Linux nodes][easier-setup], you should go that route. Use this guide only if you must set up etcd the *hard* way.
Contributor

Remove the leading 'f', add a colon after 'begin', and capitalize 'If'.

This tutorial outlines how to setup the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
This tutorial outlines how to set up the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.

It is expected that you have some familiarity with etcd operations before entering this guide and have at least skimmed the [etcd clustering guide][etcd-clustering] first.

We will deploy a simple 2 node etcd v3 cluster on two local Virtual Machines. This tutorial does not cover setting up TLS, however principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.
Contributor

virtual machines - no caps.


```ini
[Service]
Environment="ETCD_IMAGE_TAG=v3.1.7"
Contributor

Nit: even by hand this often shouldn't be set, so that usr partition updates can still update etcd's version.

Maybe comment on it being overridable if needed.

If the last command hangs for a very long time (10+ minutes), press <Ctrl>+c on your keyboard to exit the command and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)` this means there is existing data on the nodes. Since we are starting completely new nodes we will wipe away the existing data and re-start the service. Run the following on both nodes:

```sh
$ rm -rf /var/lib/etcd
Contributor

Maybe stop etcd-member first, for safety?
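When chasing that kind of failure, scoping the journal to the unit can make the error easier to spot; a sketch, using the same journalctl flags as the guide plus a unit filter:

```sh
# Follow the etcd-member journal and watch for errors such as
# "rafthttp: request cluster ID mismatch".
journalctl -u etcd-member -xef
```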

@zbwright
Contributor

zbwright commented Jun 6, 2017

Assigning to @joshix. Elijah has moved on to the great rainy north and can no longer manage this PR. Josh will know what to do with it.
