I'm running into some issues deploying it on minikube/kind. The pod for the cluster gets created, and after some time the operator fails with:
{"level":"error","ts":1684787298.9081988,"msg":"Failed to update lock: resource name may not be empty\n","stacktrace":"k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1.1\n\tk8s.io/client-go@v0.23.2/tools/leaderelection/leaderelection.go:272\nk8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:220\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.poll\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:580\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:545\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntil\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:536\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1\n\tk8s.io/client-go@v0.23.2/tools/leaderelection/leaderelection.go:271\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tk8s.io/apimachinery@v0.23.2/pkg/util/wait/wait.go:90\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).renew\n\tk8s.io/client-go@v0.23.2/tools/leaderelection/leaderelection.go:268\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\tk8s.io/client-go@v0.23.2/tools/leaderelection/leaderelection.go:212\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\tsigs.k8s.io/controller-runtime@v0.11.0/pkg/manager/internal.go:642"}
{"level":"info","ts":1684787298.908323,"msg":"failed to renew lease default/couchbase-operator: timed out waiting for the condition\n"}
{"level":"error","ts":1684787298.908393,"logger":"main","msg":"Error starting resource manager","error":"leader election lost","stacktrace":"main.main\n\tgithub.com/ ...
I suspect the kube-scheduler is at fault, but I haven't figured it out yet. After the operator comes back up, I receive the following error message:
ERR ts=1684780626.1920617 logger=cluster msg=Failed to update members cluster=saferwall/couchbase-cluster error=unexpected status code: request failed GET http://couchbase-cluster-0000.couchbase-cluster.saferwall.svc:8091/pools/default 404 Object Not Found: "unknown pool" stacktrace=github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile
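For context, here is a small Go sketch of the same check the operator performs against that endpoint (the URL is taken from the log above; credentials are omitted). A 404 with body "unknown pool" means the node is reachable but has not been initialised into a cluster, i.e. no default pool exists yet:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL copied from the operator log; only resolvable from inside the cluster.
	url := "http://couchbase-cluster-0000.couchbase-cluster.saferwall.svc:8091/pools/default"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// Expect 404 "unknown pool" while the node is uninitialised,
	// 200 with pool details once the cluster/pool has been created.
	fmt.Println(resp.StatusCode, string(body))
}
```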
If I understand it correctly, there is no pool yet; the pool gets created together with the cluster. If I create the cluster manually, a default pool exists, but then the UUIDs no longer match. Curl output from couchbase-cluster-0000:
Couchbase operator version:
Is it possible to recover safely from the first failure? If you need more info, please ping me.