
Errors and warnings renaming Node ID [1.7.2] #7692

Closed

archekb opened this issue Apr 23, 2020 · 1 comment
Comments

archekb commented Apr 23, 2020

Overview of the Issue

I have a Consul cluster with 3 IPv6 nodes. Yesterday I updated the cluster by restarting each container with the new Consul version. Since the update there are many errors and warnings in the log about renaming the Node ID; this issue appears to correlate with #4741.

Current consul members output:

Node                Address                      Status   Type    Build  Protocol  DC   Segment
s0.dev.example.com  [xxxx:xxx:xxx:xxx::2]:8301   alive    server  1.7.2  2         dc1
s1.dev.example.com  [xxxx:xxx:xxx:yyy::2]:8301   alive    server  1.7.2  2         dc1
s2.dev.example.com  [xxxx:xxx:xxx:zz::2]:8301    alive    server  1.7.2  2         dc1
e0.dev.example.com  [xxxx:xxx:xxx:aaa::2]:8301   alive    client  1.7.1  2         dc1
e1.dev.example.com  [xxxx:xxx:xxx:bbb::2]:8301   alive    client  1.7.1  2         dc1
test0               172.16.147.16:8301           leaving  client  1.7.2  2         dc1
test1               [xxxx:xxx:xxx:ccc::2]:8301   alive    client  1.7.2  2         dc1

The cluster itself is OK.

Reproduction Steps

Steps to reproduce this issue:

  1. Create a cluster with 3 IPv6 server nodes.
  2. Restart one node with the new Consul version and wait until the cluster has synced (see the sketch after this list).
  3. Repeat the previous step for the two remaining nodes.
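
For reference, each restart was roughly the following (a sketch only: the container name, image tag, and mounted host paths are placeholders for illustration, not necessarily the exact commands used):

# pull the new image and recreate the container in place (names/paths are placeholders)
docker pull consul:1.7.2
docker stop consul && docker rm consul
docker run -d --name consul --network host \
  -v /tmp/consul:/tmp/consul \
  -v /etc/consul.d:/consul/config \
  consul:1.7.2 agent -config-dir=/consul/config

# wait until every node reports alive again before moving to the next one
./consul members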

Consul info for both Client and Server

Server info

./consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 0
build:
    prerelease =
    revision = 9ea1a20
    version = 1.7.2
consul:
    acl = disabled
    bootstrap = false
    known_datacenters = 1
    leader = true
    leader_addr = [xxxx:xxx:xxx:yyy::2]:8300
    server = true
raft:
    applied_index = 425297
    commit_index = 425297
    fsm_pending = 0
    last_contact = 0
    last_log_index = 425297
    last_log_term = 117
    last_snapshot_index = 409679
    last_snapshot_term = 117
    latest_configuration = [{Suffrage:Voter ID:26365a1c-28c7-cd87-c604-2eb8faf78f81 Address:[xxxx:xxx:xxx:xxx::2]:8300} {Suffrage:Voter ID:4188bade-2a7b-ff34-90e4-ade73cf7c052 Address:[xxxx:xxx:xxx:yyy::2]:8300} {Suffrage:Voter ID:2fae1f7b-b49b-0749-6f9c-38a4db60111d Address:[xxxx:xxx:xxx:zz::2]:8300}]
    latest_configuration_index = 0
    num_peers = 2
    protocol_version = 3
    protocol_version_max = 3
    protocol_version_min = 0
    snapshot_version_max = 1
    snapshot_version_min = 0
    state = Leader
    term = 117
runtime:
    arch = amd64
    cpu_count = 4
    goroutines = 122
    max_procs = 4
    os = linux
    version = go1.13.7
serf_lan:
    coordinate_resets = 0
    encrypted = true
    event_queue = 0
    event_time = 55
    failed = 0
    health_score = 0
    intent_queue = 4969
    left = 0
    member_time = 255
    members = 7
    query_queue = 0
    query_time = 21
serf_wan:
    coordinate_resets = 0
    encrypted = true
    event_queue = 0
    event_time = 1
    failed = 0
    health_score = 0
    intent_queue = 0
    left = 0
    member_time = 130
    members = 3
    query_queue = 0
    query_time = 1

Server config

{
  "bootstrap": false,
  "server": true,
  "bind_addr": "xxxx:xxx:xxx:xxx::2",
  "client_addr": "xxxx:xxx:xxx:xxx::2",
  "datacenter": "DC1",
  "encrypt": "=======encrypted key here===========",
  "data_dir": "/tmp/consul",
  "enable_local_script_checks": false,
  "log_level": "INFO",
  "retry_join": ["xxxx:xxx:xxx:xxx::2", "xxxx:xxx:xxx:yyy::2", "xxxx:xxx:xxx:zz::2"],
  "ui": true,
  "leave_on_terminate": false,
  "disable_update_check": true,
  "disable_host_node_id": true,
  "skip_leave_on_interrupt": false,
  "reconnect_timeout": "8h"
}

Operating system and Environment details

Linux s1.dev.example.com 4.15.0-64-generic #73-Ubuntu SMP Thu Sep 12 13:16:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Docker version 19.03.6, build 369ce74a3c

Log Fragments

Server error

consul_1 | 2020-04-23T06:14:17.357Z [WARN] agent: Syncing node info failed.: error="rpc error making call: failed inserting node: Error while renaming Node ID: "2fae1f7b-b49b-0749-6f9c-38a4db60111d": Node name s2.dev.example.com is reserved by node 6c1aec5a-19e4-431b-2348-8413ec6d5ed7 with name s2.dev.example.com (xxxx:xxx:xxx:zz::2)"
consul_1 | 2020-04-23T06:14:17.357Z [ERROR] agent.anti_entropy: failed to sync remote state: error="rpc error making call: failed inserting node: Error while renaming Node ID: "2fae1f7b-b49b-0749-6f9c-38a4db60111d": Node name s2.dev.example.com is reserved by node 6c1aec5a-19e4-431b-2348-8413ec6d5ed7 with name s2.dev.example.com (xxxx:xxx:xxx:zz::2)"
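
For comparison, the two IDs in the error can be checked like this (a sketch; the data_dir and HTTP address just follow the config above, and reaching the agent's HTTP API with curl on the default port is an assumption about how it is exposed):

# node ID the local agent persists on disk (data_dir from the config above)
cat /tmp/consul/node-id

# node ID the catalog still holds for that node name (-g lets curl accept an IPv6 literal)
curl -g -s 'http://[xxxx:xxx:xxx:zz::2]:8500/v1/catalog/node/s2.dev.example.com' | grep '"ID"'

Here the ID in the error ("2fae1f7b-...") matches the s2 entry in raft's latest_configuration above, while the catalog apparently still reserves the name s2.dev.example.com for the old ID ("6c1aec5a-...").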

@jsosulska
Contributor

Hi @archekb

I believe this is a duplicate of #7396. Closing to track in #7396.
