400 errors after deploying Nacos 2.3.2 on k8s #12782
Comments
The issue doesn't provide useful information; it provides a lot of irrelevant information, so the problem can't be judged from it. Going by the description, the first two requests fail with 400 Bad Request and the third succeeds, so my preliminary guess is that the first two requests carry incorrect or illegal parameters and only the third uses a correct request.
Please provide more relevant information, such as whether there are errors in the logs and the error message in the response. There's no need to paste the k8s YAML configuration; that has nothing to do with Nacos.
@KomachiSion
I solved this problem later.
The configuration didn't conform to the spec. After correcting it with reference to https://nacos.io/docs/latest/manual/admin/auth/, the problem no longer appeared. @KomachiSion @leoxhj
For now it does look like configuring this variable is the cause. When I later reproduced the setup and tried this parameter again, the earlier problem still appeared, and there were no errors in the logs.
I don't understand what your reply above means. I also have these two parameters set by default, with the same values, configured in application.properties:
### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.enabled=${NACOS_AUTH_ENABLE:true}
nacos.core.auth.enable.userAgentAuthWhite=${NACOS_AUTH_USER_AGENT_AUTH_WHITE_ENABLE:false}
nacos.core.auth.server.identity.key=${NACOS_AUTH_IDENTITY_KEY:1234}
nacos.core.auth.server.identity.value=${NACOS_AUTH_IDENTITY_VALUE:1234}
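For context on the snippet above: the ${ENV_NAME:default} placeholders resolve against the container environment, so the StatefulSet env vars override the defaults after the colon. A minimal illustrative sketch of that resolution behaviour (not Spring's or Nacos's actual code):

import os
import re

# Matches ${NAME:default} and substitutes the environment value, else the default
PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+):([^}]*)\}")

def resolve(line: str) -> str:
    return PLACEHOLDER.sub(lambda m: os.environ.get(m.group(1), m.group(2)), line)

print(resolve("nacos.core.auth.server.identity.key=${NACOS_AUTH_IDENTITY_KEY:1234}"))
# prints "...identity.key=1234" unless NACOS_AUTH_IDENTITY_KEY is set in the pod spec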
So is your problem solved or not? Also, the token cache is enabled by default; I checked this configuration file, it's 18000 seconds.
It has been solved.
I configured these two parameters as required by https://nacos.io/docs/latest/manual/admin/auth/ and didn't enable the token cache; the 400 problem hasn't appeared since.
Look, I have these two values configured above as well. Does the value really have to be "YWRtaW4="? I did configure both of these keys, but the problem was still there.
This is how I set it afterwards: for the key I use uppercase plaintext, and for the value a 32-character base64-encoded string.
Right, mine is also a 32-character base64 encoding, but my key and value are the same 32-character encoded value. Can they really not be the same?
Honestly I don't know either; I got this result by trial and error.
See nacos-group/nacos-group.github.io#436 for reference. I think that's what I drew on at the time, but it's been too long to remember clearly. Also, while capturing the HTTP packets for the 400 responses I noticed this key was displayed abnormally: my nacos.core.auth.server.identity.key showed up garbled in the HTTP packet, which is why I went looking for a suitable fix around it.
The post you linked is the key to the problem. The 32-character encoding I generated contains an "=" sign, which presumably doesn't comply with the rules for HTTP header names. I'll change it and try again; I expect this is the root cause. Thanks a lot, mate. I suspect the community developers haven't run into this case either.
Confirmed, the problem is solved. The root cause is that the NACOS_AUTH_IDENTITY_KEY value must not contain special characters, because it gets added as an HTTP header. I suggest the official documentation point this out for this key, otherwise other people will keep running into the same 400 error.
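For anyone else hitting this: the identity key is used as an HTTP header name, and header names are limited to RFC 7230 "token" characters, which exclude "=" (and therefore base64 padding). A minimal Python sketch, not part of Nacos, to sanity-check a candidate key before deploying:

import string

# RFC 7230 "token" characters allowed in an HTTP header field name:
# letters, digits, and ! # $ % & ' * + - . ^ _ ` | ~
TOKEN_CHARS = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")

def is_legal_header_name(name: str) -> bool:
    # True only if the name is non-empty and every character is a token character
    return bool(name) and all(c in TOKEN_CHARS for c in name)

print(is_legal_header_name("SERVERIDENTITY"))  # True  -> safe to use as a header name
print(is_legal_header_name("YWRtaW4="))        # False -> '=' is not a token character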
I have a question: the 1234 you set at the beginning should have been compliant, right?
For security reasons, I replaced the real, problematic 32-character encoded key from my environment with 1234 here. Right, if it were actually 1234 this problem wouldn't occur.
The identity key and value don't need to use base64; only secure.token needs to be at least 32 characters long and base64-encoded.
Yes, I checked the official documentation afterwards: the key doesn't need base64, and base64 is recommended for the value. As a beginner I was quite confused by this; thanks a lot.
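To summarise the thread, a minimal sketch of generating values consistent with the advice above (the key name SERVERIDENTITY is only an example, not something Nacos requires): a plaintext identity key without special characters, a 32-character base64 identity value without padding, and a base64 token secret of at least 32 characters:

import base64
import secrets

identity_key = "SERVERIDENTITY"  # plaintext, token characters only, safe as a header name
identity_value = base64.b64encode(secrets.token_bytes(24)).decode()  # 24 bytes -> exactly 32 chars, no '=' padding
token_secret = base64.b64encode(secrets.token_bytes(32)).decode()    # 32 bytes -> 44 chars, over the 32-char minimum

print(identity_key)
print(identity_value)
print(token_secret)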
Deploying Nacos 2.3.2 on k8s
Sorry. After deployment the cluster status is normal, but only one node is usable. Specifically, the first and second requests fail (err: 400, msg: bad request) and the third succeeds. The cluster status shows everything as normal and the leader is consistent. Here is my YAML:
kind: Service
apiVersion: v1
metadata:
  name: **********
  namespace: **********
  labels:
    app: nacos
    app.kubernetes.io/name: nacos
    app.kubernetes.io/version: v1
    version: v1
  annotations:
    kubesphere.io/creator: admin
    service.alpha.kubernetes.io/tolerate-unready-endpoints: 'true'
spec:
  ports:
    - name: server
      protocol: TCP
      port: 8848
      targetPort: 8848
    - name: grpc-1
      protocol: TCP
      port: 9848
      targetPort: 9848
    - name: grpc-2
      protocol: TCP
      port: 9849
      targetPort: 9849
    - name: old-raft-rpc
      protocol: TCP
      port: 7848
      targetPort: 7848
  selector:
    app: **********
  clusterIP: None
  clusterIPs:
    - None
  type: ClusterIP
  sessionAffinity: None
  publishNotReadyAddresses: true
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: **********
  namespace: **********
  labels:
    app: **********
  annotations:
    kubesphere.io/creator: admin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: **********
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: **********
      annotations:
        kubesphere.io/creator: admin
        kubesphere.io/imagepullsecrets: '{}'
        kubesphere.io/restartedAt: '2024-08-05T09:35:56.513Z'
        logging.kubesphere.io/logsidecar-config: '{}'
        pod.alpha.kubernetes.io/initialized: 'true'
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: **********
          configMap:
            name: **********
            items:
              - key: application.properties
                path: application.properties
            defaultMode: 420
        - name: **********
          configMap:
            name: **********
            items:
              - key: cluster.conf
                path: cluster.conf
            defaultMode: 420
      containers:
        - name: nacos-server
          image: 'nacos/nacos-server:v2.2.3'
          ports:
            - name: client-port
              containerPort: 8848
              protocol: TCP
            - name: client-rpc
              containerPort: 9848
              protocol: TCP
            - name: raft-rpc
              containerPort: 9849
              protocol: TCP
            - name: old-raft-rpc
              containerPort: 7848
              protocol: TCP
          env:
            - name: NACOS_REPLICAS
              value: '3'
            - name: SERVICE_NAME
              value: ********
            - name: DOMAIN_NAME
              value: ********
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: SPRING_DATASOURCE_PLATFORM
              value: mysql
            - name: NACOS_SERVER_PORT
              value: '8848'
            - name: PREFER_HOST_MODE
              value: hostname
            - name: NACOS_AUTH_CACHE_ENABLE
              value: 'true'
            - name: NACOS_AUTH_ENABLE
              value: 'true'
            - name: NACOS_AUTH_IDENTITY_KEY
              value: YWRtaW4=
            - name: NACOS_AUTH_IDENTITY_VALUE
              value: YWRtaW4=
            - name: NACOS_AUTH_TOKEN
              value: **********
            - name: NACOS_SERVERS
              value: >-
                **********
                **********
                **********
            - name: MODE
              value: cluster
          resources:
            limits:
              cpu: '4'
              memory: 4Gi
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
            - name: nacos-server
              mountPath: /home/nacos/data
            - name: **********
              readOnly: true
              mountPath: /home/nacos/conf/application.properties
              subPath: application.properties
            - name: volume-06oeda
              readOnly: true
              mountPath: /home/nacos/conf/cluster.conf
              subPath: cluster.conf
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext: {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nacos
              topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: nacos-server
        namespace: ********
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: nfs
        volumeMode: Filesystem
      status:
        phase: Pending
  serviceName: nacos-headless
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  revisionHistoryLimit: 10