pkg/flags: conflicting environment variable "ETCDCTL_ENDPOINTS" is shadowed by corresponding command-line flag (either unset environment variable or disable flag) #14640
This might be a known issue, which was fixed in 3.4.21 and 3.5.5. Please refer to #14434. Please let me know whether you can still reproduce this issue in 3.4.21 or 3.5.5. Note: 3.3 is out of support.
@ahrtr I am not seeing the solution; that is why I raised this issue. I have already gone through the existing issues raised in this portal. Can you please paste the actual solution here instead of closing it?
It's a known issue; you need to upgrade to 3.4.21 or 3.5.5. @tjungblu
@ahrtr do you mean that I need to upgrade the existing cluster to version 3.4.21, instead of using the Dockerfile where the etcd version is 3.5.5?
While it looks like the issue fixed in #14434, it's simply that you're passing both environment variables and command-line arguments in your pod template.
As the error states, just use one or the other; passing both will not work. See the sketch below.
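For illustration, a minimal sketch of the two mutually exclusive styles (the endpoint and snapshot path here are placeholders, not values from this issue):

# Style 1: environment variables only (no connection flags on the command line)
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
etcdctl snapshot save /tmp/snap.db

# Style 2: command-line flags only (keep ETCDCTL_* out of the environment)
unset ETCDCTL_ENDPOINTS
etcdctl --endpoints=https://127.0.0.1:2379 snapshot save /tmp/snap.db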
Also make sure both your
Thanks @ahrtr and @tjungblu, it worked after deleting the variable ETCDCTL_ENDPOINTS from args and using etcdctl version 3.5.5.
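In other words (an untested sketch, assuming the ETCDCTL_* variables in the pod's env: section are kept, so that etcdctl reads the endpoints and certificates from the environment), the args line reduces to:

args: ["etcdctl snapshot save /var/etcd-backup/etcd-snapshot-$(date +%Y-%m-%dT%H:%M).db"]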
What happened?
Hi Team,
I am trying to create a CronJob to take a Kubernetes cluster backup using the approach below. The cluster already exists in my project, but after applying the CronJob to it, I am getting the error "pkg/flags: conflicting environment variable "ETCDCTL_ENDPOINTS" is shadowed by corresponding command-line flag (either unset environment variable or disable flag)" in the container. However, I have not set this variable explicitly in the existing cluster.
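The same message can be reproduced outside Kubernetes (a minimal sketch; the endpoint is a placeholder, and any etcdctl invocation that receives the same setting from both sources should trip the check):

export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
etcdctl --endpoints=https://127.0.0.1:2379 endpoint status
# pkg/flags: conflicting environment variable "ETCDCTL_ENDPOINTS" is shadowed by
# corresponding command-line flag (either unset environment variable or disable flag)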
Dockerfile:
FROM alpine:latest
ARG ETCD_VERSION=v3.5.5
ENV ETCDCTL_CACERT="/etc/kubernetes/pki/etcd/ca.crt"
ENV ETCDCTL_KEY="/etc/kubernetes/pki/etcd/client.key"
ENV ETCDCTL_CERT="/etc/kubernetes/pki/etcd/client.crt"
RUN apk add --update --no-cache bash ca-certificates tzdata openssl
RUN wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    && tar xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    && mv etcd-${ETCD_VERSION}-linux-amd64/etcdctl /usr/local/bin/etcdctl \
    && rm -rf etcd-${ETCD_VERSION}-linux-amd64*
ENTRYPOINT ["/bin/bash"]
Below is the CronJob configuration:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: etcd-backup
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 5
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: etcd-backup
            image: artifacts.corp.x**.com/etcd-backup:v3.5.5
            imagePullPolicy: IfNotPresent
            env:
            - name: ETCDCTL_API
              value: "3"
            - name: ETCDCTL_ENDPOINTS
              value: "https://10.150.136.133:2379,https://10.150.***.***:2379,https://10.***.**.**:2379"
            - name: ETCDCTL_CACERT
              value: "/etc/kubernetes/pki/etcd/ca.crt"
            - name: ETCDCTL_CERT
              value: "/etc/kubernetes/pki/etcd/client.crt"
            - name: ETCDCTL_KEY
              value: "/etc/kubernetes/pki/etcd/client.key"
            - name: ARTIFACTORY_API_KEY
              valueFrom:
                secretKeyRef:
                  name: etcd-backup-artifactory-secret
                  key: ARTIFACTORY_API_KEY
            command: ["/bin/bash","-c"]
            args: ["ETCDCTL_API=3 etcdctl --endpoints=${ETCDCTL_ENDPOINTS} --cacert=${ETCDCTL_CACERT} --cert=${ETCDCTL_CERT} --key=${ETCDCTL_KEY} snapshot save /var/etcd-backup/etcd-snapshot-$(date +%Y-%m-%dT%H:%M).db"]
            volumeMounts:
            - mountPath: /etc/kubernetes/pki/etcd
              name: etcd-certs
              readOnly: true
            - mountPath: /var/etcd-backup
              name: etcd-backup
            - mountPath: /etc/localtime
              name: local-timezone
          restartPolicy: OnFailure
          hostNetwork: true
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
            operator: Exists
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd
              type: Directory
          - name: etcd-backup
            hostPath:
              path: /var/etcd-backup
              type: DirectoryOrCreate
          - name: local-timezone
            hostPath:
              path: /usr/share/zoneinfo/America/Los_Angeles
          imagePullSecrets:
          - name: etcd-backup-regcred
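After shell expansion, the container effectively runs the command below (reflowed here for readability). Every connection flag duplicates an ETCDCTL_* variable that is still set in the process environment, which is exactly the combination the pkg/flags check rejects:

# Each flag carries the value of the variable of the same name, but the
# variable itself also remains set, so etcdctl sees the setting twice.
ETCDCTL_API=3 etcdctl \
    --endpoints="${ETCDCTL_ENDPOINTS}" \
    --cacert="${ETCDCTL_CACERT}" \
    --cert="${ETCDCTL_CERT}" \
    --key="${ETCDCTL_KEY}" \
    snapshot save /var/etcd-backup/etcd-snapshot-$(date +%Y-%m-%dT%H:%M).db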
etcdctl version in my existing cluster:
$ etcdctl --version
etcdctl version: 3.2.18
API version: 2
What did you expect to happen?
I expect this CronJob to work and to save the backup at the specified path.
How can we reproduce it (as minimally and precisely as possible)?
Please use my Dockerfile to build the image (a possible build command is sketched below).
Then use the CronJob configuration above to create the CronJob in the Kubernetes cluster, and check the container logs.
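The original build command was not included in the report; a typical invocation would be something like the following (the image tag is hypothetical):

# hypothetical tag; the actual registry path was not given in the issue
docker build --build-arg ETCD_VERSION=v3.5.5 -t etcd-backup:v3.5.5 .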
Anything else we need to know?
No response
Etcd version (please run commands below)
Etcd configuration (command line flags or environment variables)
No response
Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)
Relevant log output
No response