Ensure k8s 1.8 works with Contiv #39
Comments
Hopefully this work can be done in https://github.com/metacloud/kubespray. I am working on rebasing to the latest from upstream and should be done by the end of this week. by vijkatam
Solar ticket -> SOLAR-2458. by vijkatam
Sorry for the confusion, I am not doing the rebase anymore. by vijkatam
K8s 1.8.2 is working with Contiv 1.1.7 in a vagrant environment. Two pods on the default network are able to ping:

[vagrant@kubeadm-master ~]$ rpm -qa | grep kub
kubernetes-cni-0.5.1-1.x86_64
kubeadm-1.8.2-0.x86_64
kubelet-1.8.2-0.x86_64
kubectl-1.8.2-0.x86_64
[vagrant@kubeadm-master ~]$ netctl net ls
Tenant   Network      Nw Type  Encap type  Packet tag  Subnet        Gateway    IPv6Subnet  IPv6Gateway  Cfgd Tag
------   -------      -------  ----------  ----------  -------       -------    ----------  -----------  --------
default  contivh1     infra    vxlan       0           132.1.1.0/24  132.1.1.1
default  default-net  data     vxlan       0           20.1.1.0/24   20.1.1.1

lo    Link encap:Local Loopback
/ # ping 20.1.1.4

by vijkatam
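When repeating this check across k8s versions, the `netctl net ls` listing above could also be verified programmatically. A minimal Python sketch, assuming the whitespace-separated column layout shown in the comment (the parser and field names here are illustrative, not part of Contiv):

```python
def parse_netctl_net_ls(output):
    """Parse `netctl net ls` output into a list of dicts.

    Assumes the column order seen above; the optional IPv6/Cfgd-Tag
    columns are empty in this output, so seven fields are enough.
    """
    lines = [l for l in output.strip().splitlines() if l.strip()]
    fields = ["tenant", "network", "nw_type", "encap", "pkt_tag", "subnet", "gateway"]
    nets = []
    for line in lines[2:]:  # skip the header row and the dashed separator row
        nets.append(dict(zip(fields, line.split())))
    return nets

# Sample taken from the listing in the comment above.
sample = """\
Tenant   Network      Nw Type  Encap type  Packet tag  Subnet        Gateway
------   -------      -------  ----------  ----------  -------       -------
default  contivh1     infra    vxlan       0           132.1.1.0/24  132.1.1.1
default  default-net  data     vxlan       0           20.1.1.0/24   20.1.1.1
"""

nets = parse_netctl_net_ls(sample)
```

A check like `any(n["network"] == "default-net" for n in nets)` could then gate an automated regression run.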
K8s 1.8.2 is working with Contiv 1.1.7 in a proxmox environment. Two pods on the default network are able to ping:

[root@mcp1 tmp]# netctl tenant create Tester
Creating tenant: Tester
[root@mcp1 tmp]# netctl net create --subnet=1.1.1.0/24 -g 1.1.1.1 -t Tester TestNet
Creating network Tester:TestNet
[root@mcp1 tmp]# netctl group create -t Tester TestNet TestEPG
Creating EndpointGroup Tester:TestEPG
[root@mcp1 tmp]# cat k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bb1
  labels:
    app: test
    io.contiv.tenant: Tester
    io.contiv.network: TestNet
    io.contiv.net-group: TestEPG
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command:
    - tail
    - -f
    - /dev/null
  restartPolicy: Never
[root@mcp1 tmp]# kubectl create -f k8s_pod.yaml
pod "bb2" created
[root@mcp1 tmp]# kubectl create -f k8s_pod.yaml
pod "bb1" created
[root@mcp1 tmp]# kubectl get pods -o wide
NAME  READY  STATUS   RESTARTS  AGE  IP       NODE
bb1   1/1    Running  0         6s   1.1.1.3  mhv2
bb2   1/1    Running  0         24s  1.1.1.2  mhv1
[root@mcp1 tmp]# kubectl exec -it bb2 sh
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
34: eth0@if33: mtu 1450 qdisc noqueue
    link/ether 02:02:01:01:01:02 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:1ff:fe01:102/64 scope link
       valid_lft forever preferred_lft forever
/ # ping 1.1.1.3
PING 1.1.1.3 (1.1.1.3): 56 data bytes
64 bytes from 1.1.1.3: seq=0 ttl=64 time=1.350 ms
64 bytes from 1.1.1.3: seq=1 ttl=64 time=0.226 ms
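The pod spec in the comment above attaches the pod to a Contiv tenant/network/endpoint group purely through metadata labels. A small sketch that builds such a manifest as a Python dict; the helper name `contiv_pod` and its defaults are hypothetical, while the `io.contiv.*` label keys are the ones shown in the test above:

```python
import json

def contiv_pod(name, tenant, network, epg, image="busybox:latest"):
    """Build a Pod manifest carrying Contiv's network-selection labels.

    Hypothetical helper for illustration; Contiv itself only reads the
    io.contiv.* labels, it does not ship this function.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            "labels": {
                "app": "test",
                "io.contiv.tenant": tenant,
                "io.contiv.network": network,
                "io.contiv.net-group": epg,
            },
        },
        "spec": {
            "containers": [{
                "name": "busybox",
                "image": image,
                # Keep the container alive so connectivity can be tested.
                "command": ["tail", "-f", "/dev/null"],
            }],
            "restartPolicy": "Never",
        },
    }

manifest = contiv_pod("bb1", "Tester", "TestNet", "TestEPG")
print(json.dumps(manifest, indent=2))
```

Serialized to YAML or JSON, this reproduces the `k8s_pod.yaml` used in the proxmox verification, which makes it easy to stamp out the second pod (`bb2`) for the ping test.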
PR for the vagrant work: contiv/install#311 by amccormi
Description
Currently we support k8s 1.4 and 1.6. We need to ensure k8s 1.8 is working correctly with Contiv 1.1.5+.
Acceptance Criteria:
1. Ensure k8s 1.8 is working correctly with Contiv 1.1.5+
2. Update required install files

[CNTV-100] created by kahou.lei