
Restarting the metad cluster does not transfer the meta leader #423

Closed
jinyingsunny opened this issue Jan 24, 2024 · 3 comments
Assignees
Labels
affects/master · process/done · severity/major · type/bug

Comments

@jinyingsunny

As the title says: when the 3 metad pods restart one after another, the meta leader stays on metad1 throughout. During the restart window, actions that need to contact the meta leader may fail.

# Get the current timestamp
$ date -u +%s
1706083832

# Overwrite the restart timestamp to trigger a rolling restart of the metad component
$ kubectl -n nebula annotate sts nebula2-metad nebula-graph.io/restart-timestamp="1706083832" --overwrite
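
To wait until the rolling restart completes, the standard kubectl rollout command can be used against the same StatefulSet (a minimal sketch, reusing the names above):

$ kubectl -n nebula rollout status sts nebula2-metad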


# Watch the restart events
$ kubectl -n nebula get pod
nebula2-console 1/1 Running 0 42m
nebula2-exporter-5d5d6f5455-7r842 1/1 Running 0 58m
nebula2-graphd-0 1/1 Running 0 41m
nebula2-graphd-1 1/1 Running 0 41m
nebula2-graphd-2 1/1 Running 0 42m
nebula2-metad-0 1/1 Running 0 58m
nebula2-metad-1 0/1 ContainerCreating 0 0s
nebula2-metad-2 1/1 Running 0 42s
nebula2-storaged-0 1/1 Running 0 9m59s
nebula2-storaged-1 1/1 Running 0 7m31s
nebula2-storaged-2 1/1 Running 0 10m
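
To check that each metad replica comes back online while the rollout proceeds, one option is to query the meta hosts through the bundled console (a sketch; the graphd service name, user, and password here are assumptions for this cluster):

$ kubectl -n nebula exec -it nebula2-console -- \
    nebula-console -addr nebula2-graphd-svc -port 9669 -u root -p nebula \
    -e "SHOW HOSTS META"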


Your Environments (required)

operator: snap-1.30

Expected behavior

Before restarting the meta leader pod, transfer the meta leader to another metad pod.

@jinyingsunny jinyingsunny added severity/major Severity of bug type/bug Type: something is unexpected affects/master PR/issue: this bug affects master version. labels Jan 24, 2024
@MegaByte875
Contributor

Metad does not provide a transfer-leader interface.

@wenhaocs

Agree. If there is no ongoing schema change, graphd and storaged only send heartbeats to metad, so they can tolerate a leader failover for some time. Any cluster has only 3 metad replicas, and the restart process should be quick. Moreover, restarting metad is rare. I suggest keeping things as they are for now.
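
For context, the tolerance described above comes from the heartbeat cycle: a heartbeat that fails while the meta leader is restarting is simply retried on the next interval. A minimal sketch of the relevant flag in nebula-graphd.conf / nebula-storaged.conf (10s is the upstream default as I understand it; verify against your own config):

# Interval, in seconds, between heartbeats sent to the meta service.
# A heartbeat missed while the meta leader restarts is retried on the
# next cycle, so brief leader outages go largely unnoticed.
--heartbeat_interval_secs=10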

@jinyingsunny
Author

OK, marking this as a known issue.

@github-actions github-actions bot added the process/fixed Process of bug label Feb 25, 2024
@jinyingsunny jinyingsunny added process/done Process of bug and removed process/fixed Process of bug labels Feb 25, 2024