
External Control Plane architecture is not supported #319

Open
dryheartGo opened this issue Mar 16, 2023 · 7 comments
Labels
bug Something isn't working

Comments

@dryheartGo

When Istio is deployed with the External Control Plane architecture, the limiter component does not work.

  • Slime version: 0.6.0
  • Istio version: 1.13.2
  • k8s version: 1.21.10

[ ] Configuration Lazy Loading
[ ] Http Plugin Management
[X] Adaptive Ratelimit
[ ] Slime Boot

limiter log:

time=2023-03-16T09:54:26Z level=info msg=get err in queryServicePods, get service namespace/xxx-service faild, services "xxx-service" not found module=limiter pkg=controllers

It looks like the limiter uses the k8s API directly, so it cannot find the CR information in the remote cluster.
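The symptom matches a lookup that goes only to the local cluster's API server. As a hypothetical illustration (not Slime's actual code), a Service registered only in the remote cluster is invisible to a client bound to the control-plane cluster:

```python
# Hypothetical sketch of the suspected failure mode: the limiter queries
# only the local cluster's API server, so a Service that exists only in
# the remote cluster is reported as "not found".
class ClusterClient:
    """Minimal stand-in for a per-cluster Kubernetes client."""

    def __init__(self, services):
        self._services = services  # {(namespace, name): service spec}

    def get_service(self, namespace, name):
        key = (namespace, name)
        if key not in self._services:
            # Mirrors the limiter log: services "xxx-service" not found
            raise KeyError(f'services "{name}" not found')
        return self._services[key]


# The workload's Service is registered only in the remote cluster.
remote = ClusterClient({("namespace", "xxx-service"): {"clusterIP": "10.0.0.1"}})
local = ClusterClient({})  # external control-plane cluster: no workload Services

remote.get_service("namespace", "xxx-service")  # succeeds
try:
    local.get_service("namespace", "xxx-service")  # fails, as in the limiter log
except KeyError as err:
    print(err)
```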

@dryheartGo added the bug label on Mar 16, 2023
@cywang1905
Contributor

cywang1905 commented Mar 21, 2023

Currently Slime supports the multi-primary multi-cluster scenario: each cluster runs its own Slime deployment, which manages only that cluster.

We have not verified the primary-remote multi-cluster scenario. You can try setting the configSources field to point at multiple apiserver addresses. An example:

# slimeboot cr
apiVersion: config.netease.com/v1alpha1
kind: SlimeBoot
metadata:
  name: limiter
  namespace: mesh-operator
spec:
  image:
    pullPolicy: Always
    repository: docker.io/slimeio/slime-limiter
    tag: v0.6.0_linux_amd64
  module:
    - name: limiter
      kind: limiter
      enable: true
      general:
        disableGlobalRateLimit: true
        disableAdaptive: true
        disableInsertGlobalRateLimit: true
      global:
        log:
          logLevel: info
        configSources:
          - address: k8s://xxx
          - address: k8s://xxx 

@whalecold

Pod IPs can conflict between different k8s clusters. I looked at the code, and sourceSvc is resolved from dowloadAddress, so couldn't that produce the wrong dependency relationship?

@cywang1905
Contributor

> Pod IPs can conflict between different k8s clusters. I looked at the code, and sourceSvc is resolved from dowloadAddress, so couldn't that produce the wrong dependency relationship?

The multi-cluster scenario we consider here assumes a single network: pod IPs are directly reachable across clusters and do not conflict with each other.
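That single-network assumption can be checked mechanically: if the clusters' pod CIDRs do not overlap, IP-based dependency resolution stays unambiguous. A small sketch using Python's standard ipaddress module (the CIDRs below are made-up examples):

```python
import ipaddress


def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two pod CIDR ranges share any addresses."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)


# Non-overlapping pod CIDRs: IP-based lookups across clusters stay unambiguous.
print(cidrs_overlap("10.244.0.0/16", "10.245.0.0/16"))  # False

# Overlapping CIDRs: the same pod IP may exist in both clusters, so
# resolving a service from an address could pick the wrong cluster.
print(cidrs_overlap("10.244.0.0/16", "10.244.128.0/17"))  # True
```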

@xdccy

xdccy commented May 25, 2023

> Currently Slime supports the multi-primary multi-cluster scenario: each cluster runs its own Slime deployment, which manages only that cluster.
>
> We have not verified the primary-remote multi-cluster scenario. You can try setting the configSources field to point at multiple apiserver addresses, as in the SlimeBoot example above.

We configured configSource in primary-remote mode, in the format k8s://https://xxxxxxxx.eks.amazonaws.com, but the limiter log reports the error:
level=error msg=[svcToIstioResHandler] [EventAdd] ConvertSvcAndEps error: failed to get related pod for endpoint
Is there a problem with our configuration?
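For reference, the address above combines a k8s:// scheme prefix with the full apiserver URL. A hypothetical parser (not Slime's or Istio's actual code, and the endpoint is a placeholder) showing how such an address splits:

```python
def parse_config_source(address: str) -> str:
    """Strip the k8s:// scheme and return the apiserver URL.

    Hypothetical helper illustrating the address format quoted in the
    thread; it is not taken from the Slime codebase.
    """
    prefix = "k8s://"
    if not address.startswith(prefix):
        raise ValueError(f"unsupported configSource scheme: {address}")
    return address[len(prefix):]


# A placeholder EKS endpoint, following the format quoted in the comment.
print(parse_config_source("k8s://https://example.eks.amazonaws.com"))
# -> https://example.eks.amazonaws.com
```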

@MouceL
Contributor

MouceL commented May 25, 2023

This configSource only implements part of Istio's configSource functionality.

In practice, Slime does not fetch data from the store.

@xdccy

xdccy commented May 25, 2023

> This configSource only implements part of Istio's configSource functionality.
>
> In practice, Slime does not fetch data from the store.

So does that mean Slime's support for the multi-primary architecture is still insufficient?

@MouceL
Contributor

MouceL commented May 25, 2023

We basically have not considered the multi-primary architecture scenario.

There is a lot that could be done in this direction.
