
Support IPv6DualStack with Canal #2769

Closed

n1kofr opened this issue Dec 6, 2021 · 4 comments

Comments

n1kofr commented Dec 6, 2021

RKE version:
v1.3.3-rc4

Docker version: (docker version, docker info preferred)
20.10.11

Operating system and kernel: (cat /etc/os-release, uname -r preferred)

cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.4:GA"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.4"

uname -r
4.18.0-305.19.1.el8_4.x86_64

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
OpenStack / KVM

cluster.yml file:

nodes:
  - address: 10.71.8.32
    internal_address: 10.71.6.32
    user: cloud-user
    hostname_override: k8smaster001
    role:
      - controlplane
      - etcd
  - address: 10.71.8.33
    internal_address: 10.71.6.33
    user: cloud-user
    hostname_override: k8smaster002
    role:
      - controlplane
      - etcd
  - address: 10.71.8.34
    internal_address: 10.71.6.34
    user: cloud-user
    hostname_override: k8smaster003
    role:
      - controlplane
      - etcd

  - address: 10.71.8.35
    internal_address: 10.71.6.35
    user: cloud-user
    hostname_override: k8sbe001
    taints:
    labels:
    role:
      - worker
  - address: 10.71.8.36
    internal_address: 10.71.6.36
    user: cloud-user
    hostname_override: k8sbe002
    taints:
    labels:
    role:
      - worker
  - address: 10.71.8.37
    internal_address: 10.71.6.37
    user: cloud-user
    hostname_override: k8sbe003
    taints:
    labels:
    role:
      - worker
  - address: 10.71.8.90
    internal_address: 10.71.6.90
    user: cloud-user
    hostname_override: k8sfe001
    taints:
      - effect: NoSchedule
        key: app
        value: edn
    labels:
      app: edn
    role:
      - worker
  - address: 10.71.8.91
    internal_address: 10.71.6.91
    user: cloud-user
    hostname_override: k8sfe002
    taints:
      - effect: NoSchedule
        key: app
        value: edn
    labels:
      app: edn
    role:
      - worker
  - address: 10.71.8.100
    internal_address: 10.71.6.100
    user: cloud-user
    hostname_override: k8suicc001
    taints:
      - effect: NoSchedule
        key: app
        value: wsn
    labels:
      app: wsn
    role:
      - worker
  - address: 10.71.8.110
    internal_address: 10.71.6.110
    user: cloud-user
    hostname_override: k8shss001
    taints:
      - effect: NoSchedule
        key: app
        value: hss
    labels:
      app: hss
    role:
      - worker

ignore_docker_version: false
cluster_name: cluster.local
kubernetes_version: v1.22.3-rancher1-1

services:
  kube-api:
    service_cluster_ip_range: 10.42.0.0/16,fc00::/112
    service_node_port_range: 30000-32767
    pod_security_policy: false
    always_pull_images: false
    extra_args:
      audit-log-path: "-"
      audit-log-format: "json"
      delete-collection-workers: 3
      v: 2
      encryption-provider-config: "/etc/kubernetes/ssl/encryption-config.yaml"

  kube-controller:
    cluster_cidr: 10.43.0.0/16,fc01::/112
    service_cluster_ip_range: 10.42.0.0/16,fc00::/112
    extra_args:
      node-cidr-mask-size-ipv6: "112"
      horizontal-pod-autoscaler-sync-period: "1m0s"
      horizontal-pod-autoscaler-tolerance: 0.1
      horizontal-pod-autoscaler-initial-readiness-delay: "1m0s"
      horizontal-pod-autoscaler-cpu-initialization-period: "5m0s"
      horizontal-pod-autoscaler-downscale-stabilization: "5m0s"

  kubeproxy:
    cluster_cidr: 10.43.0.0/16,fc01::/112
    #cluster_cidr: 10.43.0.0/16
    extra_args:
        proxy-mode: "ipvs"

  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.42.0.3
    fail_swap_on: false
    extra_args:
      max-pods: 20
      pod-manifest-path: /etc/kubernetes/manifests
      v: 2

authorization:
  mode: rbac

addon_job_timeout: 120

network:
  plugin: canal
  options:
    canal_iface: eth1
    canal_flannel_backend_type: vxlan

ingress:
  provider: none

dns:
  provider: coredns

Steps to Reproduce:

Following issue #1902, I am opening a new issue because RKE 1.3.3 does not yet support DualStack with Canal. The following error is displayed when deploying a Kubernetes cluster with the above configuration:

level=fatal msg="Failed to validate cluster: Network plugin [canal] does not support IPv6 (dualstack)"
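
For completeness, the deployment step here is the standard provisioning run (assuming the file above is saved as cluster.yml):

rke up --config cluster.yml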

Results:

Could you add support for DualStack with Canal?

superseb (Contributor) commented Dec 9, 2021

This currently seems to be blocked by https://github.com/projectcalico/cni-plugin/issues/1177
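
For context, dual-stack with Canal needs the Calico CNI plugin's host-local IPAM to hand out addresses from both pod CIDRs. A rough, illustrative sketch of the IPAM block involved (the exact CNI config RKE templates for Canal may differ; the usePodCidrIPv6 keyword is the piece tracked in the linked issue):

{
  "type": "calico",
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "usePodCidr" }],
      [{ "subnet": "usePodCidrIPv6" }]
    ]
  }
}

Illustrative only, not the literal manifest RKE generates.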

stale bot commented Feb 8, 2022

This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale bot added the status/stale label Feb 8, 2022

n1kofr (Author) commented Feb 8, 2022

It looks like Calico release 3.22.0 contains the fix for Canal deployments with dual stack. Is there any update on the RKE side as to when it will be integrated?
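
In the meantime, to see which Calico/Flannel images an existing RKE-deployed Canal is running (assuming the default canal DaemonSet in kube-system), something like this should list the image tags in use:

kubectl -n kube-system get daemonset canal -o jsonpath='{.spec.template.spec.containers[*].image}'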

stale bot removed the status/stale label Feb 8, 2022

stale bot commented Apr 11, 2022

This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
