
Ensure k8s 1.8 works with Contiv #39

Open
blaksmit opened this issue Oct 17, 2017 · 6 comments

Description

Currently we support k8s 1.4 and 1.6. We need to ensure k8s 1.8 is working correctly with Contiv 1.1.5+.

Acceptance Criteria:

1. Ensure k8s 1.8 is working correctly with Contiv 1.1.5+
2. Update required install files
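A quick way to check the relevant versions once a cluster is up (a minimal sketch; the rpm query assumes the RPM-based installs used in the test transcripts below, and netctl needs a reachable netmaster):

rpm -qa | grep kube       # installed kubeadm/kubelet/kubectl/kubernetes-cni packages
kubectl version --short   # client and server versions should both report 1.8.x
netctl version            # Contiv version as reported by netctl/netmaster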

[CNTV-100] created by kahou.lei

blaksmit commented Nov 1, 2017

Hopefully this work can be done in https://github.com/metacloud/kubespray. I am working on rebasing to the latest upstream and should be done by the end of this week.

by vijkatam


blaksmit commented Nov 1, 2017

Solar ticket -> SOLAR-2458 (Open)

by vijkatam


blaksmit commented Nov 1, 2017

Sorry for the confusion, I am not doing the rebase anymore.

by vijkatam

blaksmit commented:

K8s 1.8.2 is working with Contiv 1.1.7 in a Vagrant environment. Two pods on the default network are able to ping each other.

[vagrant@kubeadm-master ~]$ rpm -qa | grep kub
kubernetes-cni-0.5.1-1.x86_64
kubeadm-1.8.2-0.x86_64
kubelet-1.8.2-0.x86_64
kubectl-1.8.2-0.x86_64
[vagrant@kubeadm-master ~]$ netctl net ls
Tenant   Network      Nw Type  Encap type  Packet tag  Subnet        Gateway    IPv6Subnet  IPv6Gateway  Cfgd Tag
------   -------      -------  ----------  ----------  ------        -------    ----------  -----------  --------
default  contivh1     infra    vxlan       0           132.1.1.0/24  132.1.1.1
default  default-net  data     vxlan       0           20.1.1.0/24   20.1.1.1

[vagrant@kubeadm-master ~]$ netctl group create default-net default
Creating EndpointGroup default:default
[vagrant@kubeadm-master ~]$ netctl group ls
Tenant   Group    Network      IP Pool  CfgdTag  Policies  Network profile
------   -----    -------      -------  -------  --------  ---------------
default  default  default-net
[vagrant@kubeadm-master ~]$ cat bb_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bb1
  labels:
    app: test
    io.contiv.tenant: default
    io.contiv.network: default-net
    io.contiv.net-group: default
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command:
      - tail
      - -f
      - /dev/null
  restartPolicy: Never
[vagrant@kubeadm-master ~]$ kubectl create -f bb_pod.yaml
pod "bb1" created
[vagrant@kubeadm-master ~]$ kubectl create -f bb_pod.yaml
pod "bb2" created
[vagrant@kubeadm-master ~]$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP         NODE
bb1       1/1       Running   0          28s       20.1.1.3   kubeadm-worker0
bb2       1/1       Running   0          3s        20.1.1.4   kubeadm-worker0
[vagrant@kubeadm-master ~]$ kubectl exec -it bb1 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:02:14:01:01:03
          inet addr:20.1.1.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::2:14ff:fe01:103/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 20.1.1.4
PING 20.1.1.4 (20.1.1.4): 56 data bytes
64 bytes from 20.1.1.4: seq=0 ttl=64 time=39.250 ms
64 bytes from 20.1.1.4: seq=1 ttl=64 time=0.062 ms
^C
--- 20.1.1.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.062/19.656/39.250 ms
/ #

by amccormi
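Note that in the Vagrant transcript above both pods landed on kubeadm-worker0, so the ping only exercises forwarding on a single host; the Proxmox test below places the pods on different nodes and so also covers the vxlan path between hosts. To force cross-node placement in the Vagrant setup, the second pod could be pinned with spec.nodeName (a sketch; kubeadm-worker1 is a hypothetical second worker name):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: bb2
  labels:
    app: test
    io.contiv.tenant: default
    io.contiv.network: default-net
    io.contiv.net-group: default
spec:
  # Hypothetical node name: pin bb2 to a different worker than bb1
  nodeName: kubeadm-worker1
  containers:
  - name: busybox
    image: busybox:latest
    command:
      - tail
      - -f
      - /dev/null
  restartPolicy: Never
EOF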

blaksmit commented:

K8s 1.8.2 is working with Contiv 1.1.7 in a Proxmox environment. Two pods on a dedicated test network (tenant Tester, network TestNet) are able to ping each other across nodes.

[root@mcp1 tmp]# netctl tenant create Tester
Creating tenant: Tester
[root@mcp1 tmp]# netctl net create --subnet=1.1.1.0/24 -g 1.1.1.1 -t Tester TestNet
Creating network Tester:TestNet
[root@mcp1 tmp]# netctl group create -t Tester TestNet TestEPG
Creating EndpointGroup Tester:TestEPG
[root@mcp1 tmp]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: bb1
  labels:
    app: test
    io.contiv.tenant: Tester
    io.contiv.network: TestNet 
    io.contiv.net-group: TestEPG 
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command:
      - tail
      - -f
      - /dev/null
  restartPolicy: Never
[root@mcp1 tmp]# kubectl create -f k8s_pod.yaml
pod "bb2" created
[root@mcp1 tmp]# kubectl create -f k8s_pod.yaml
pod "bb1" created
[root@mcp1 tmp]# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP        NODE
bb1       1/1       Running   0          6s        1.1.1.3   mhv2
bb2       1/1       Running   0          24s       1.1.1.2   mhv1
[root@mcp1 tmp]# kubectl exec -it bb2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
34: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
    link/ether 02:02:01:01:01:02 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:1ff:fe01:102/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping 1.1.1.3
PING 1.1.1.3 (1.1.1.3): 56 data bytes
64 bytes from 1.1.1.3: seq=0 ttl=64 time=1.350 ms
64 bytes from 1.1.1.3: seq=1 ttl=64 time=0.226 ms

by amccormi
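For completeness, the objects created for this test can be torn down afterwards. A rough sketch; the pods must go first since the endpoint group cannot be removed while endpoints are still attached, and the exact rm flags are assumed to mirror the create commands above:

kubectl delete pod bb1 bb2
netctl group rm -t Tester TestEPG   # assumed syntax, mirroring 'group create'
netctl net rm -t Tester TestNet
netctl tenant rm Tester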

blaksmit commented:

PR for the vagrant work: contiv/install#311

by amccormi
