dev to alpha #2431

Merged 50 commits from dev into alpha on Aug 20, 2019.

Commits (50)
5b4d595  Update to Kubernetes v1.14 (mikkeloscar, May 27, 2019)
b95d30c  Update to version 1.14.4 (arjunrn, Jul 10, 2019)
5fd4b16  Update to Kubernetes v1.14.4 (mikkeloscar, Jul 11, 2019)
94e533e  Update PriorityClass apiVersion to scheduling.k8s.io/v1 (mikkeloscar, Jul 16, 2019)
3949906  Update Ingresses to networking.k8s.io/v1beta1 (mikkeloscar, Jul 16, 2019)
9a46276  Add ec2-instance-connect support (mikkeloscar, Jul 18, 2019)
e54a108  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 22, 2019)
cc18f00  Merge pull request #2335 from zalando-incubator/dev-to-kube-1.14 (mikkeloscar, Jul 22, 2019)
aac17a7  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 23, 2019)
d3a3081  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 23, 2019)
0d129d4  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 24, 2019)
3f98800  Merge pull request #2338 from zalando-incubator/dev-to-kube-1.14 (linki, Jul 25, 2019)
da70222  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
f5e5c81  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
eebfe94  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
61a07a2  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
8095fd4  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
6d67258  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 25, 2019)
030656d  Updated the AMI with nvidia 410 drivers (arjunrn, Jul 25, 2019)
c085056  Merge pull request #2344 from zalando-incubator/dev-to-kube-1.14 (mikkeloscar, Jul 25, 2019)
e11a0f5  Merge pull request #2351 from zalando-incubator/update-ubuntu-ami-114 (mikkeloscar, Jul 26, 2019)
39ecb54  Merge branch 'dev' into kube-1.14 (mikkeloscar, Jul 26, 2019)
16fb46f  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 26, 2019)
416e1b1  Merge pull request #2362 from zalando-incubator/dev-to-kube-1.14 (mikkeloscar, Jul 26, 2019)
0320e40  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 29, 2019)
16d2620  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 29, 2019)
e47bd9d  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 29, 2019)
277f841  Merge pull request #2368 from zalando-incubator/dev-to-kube-1.14 (arjunrn, Jul 30, 2019)
e1d24e9  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 30, 2019)
10730e8  Merge pull request #2371 from zalando-incubator/dev-to-kube-1.14 (mikkeloscar, Jul 30, 2019)
04d2e9a  Merge branch 'dev' into dev-to-kube-1.14 (aermakov-zalando, Jul 31, 2019)
c63f856  Merge branch 'dev' into dev-to-kube-1.14 (aermakov-zalando, Jul 31, 2019)
e05cda5  Use the correct AMI for 1.14 (aermakov-zalando, Jul 31, 2019)
bb7808f  Merge pull request #2384 from zalando-incubator/dev-to-kube-1.14 (aermakov-zalando, Jul 31, 2019)
ccd75c9  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Jul 31, 2019)
e59c887  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Aug 1, 2019)
f0e026f  Merge pull request #2387 from zalando-incubator/dev-to-kube-1.14 (linki, Aug 1, 2019)
0af1d0c  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Aug 6, 2019)
f3a7e13  Merge pull request #2395 from zalando-incubator/dev-to-kube-1.14 (Aug 8, 2019)
001ac9e  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Aug 8, 2019)
296c7b3  Merge pull request #2399 from zalando-incubator/dev-to-kube-1.14 (Aug 8, 2019)
e3a10c0  Merge branch 'dev' into dev-to-kube-1.14 (aermakov-zalando, Aug 13, 2019)
eef6386  Merge pull request #2415 from zalando-incubator/dev-to-kube-1.14 (aermakov-zalando, Aug 14, 2019)
bdd7692  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Aug 14, 2019)
90e8028  Merge dev to dev-to-kube-1.14 (zalando-teapot-robot, Aug 15, 2019)
9e28f5b  Merge pull request #2423 from zalando-incubator/dev-to-kube-1.14 (aermakov-zalando, Aug 15, 2019)
f8198d3  Merge branch 'dev' into dev-to-kube-1.14 (aermakov-zalando, Aug 16, 2019)
8e95d8d  Merge pull request #2428 from zalando-incubator/dev-to-kube-1.14 (aermakov-zalando, Aug 19, 2019)
efcbf49  Merge pull request #2175 from zalando-incubator/kube-1.14 (aermakov-zalando, Aug 19, 2019)
1275a53  Merge dev to dev-to-alpha (zalando-teapot-robot, Aug 19, 2019)
Files changed
5 changes: 2 additions & 3 deletions cluster/config-defaults.yaml
@@ -200,7 +200,7 @@ teapot_admission_controller_validate_application_label: "false"
 {{end}}
 
 {{if eq .Environment "e2e"}}
-teapot_admission_controller_ignore_namespaces: "^kube-system|(e2e-tests-(downward-api|kubectl|projected|statefulset|pod-network)-.*)$"
+teapot_admission_controller_ignore_namespaces: "^kube-system|((downward-api|kubectl|projected|statefulset|pod-network)-.*)$"
 {{else}}
 teapot_admission_controller_ignore_namespaces: "^kube-system$"
 {{end}}
@@ -219,8 +219,7 @@ cluster_dns: "coredns"
 coredns_log_svc_names: "true"
 
 coreos_image: "ami-0d1579b60bb706fb7" # Container Linux 2079.6.0 (HVM, eu-central-1)
-kuberuntu_image: {{ amiID "zalando-ubuntu-kubernetes-production-v1.13.7-master-51" "861068367966" }}
-
+kuberuntu_image: {{ amiID "zalando-ubuntu-kubernetes-production-v1.14.5-master-51" "861068367966" }}
 
 # Feature toggle to allow gradual decommissioning of ingress-template-controller
 enable_ingress_template_controller: "false"
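The ignore-namespaces change above tracks the Kubernetes 1.14 e2e framework, which dropped the "e2e-tests-" prefix from generated test namespaces. A minimal Go sketch, not part of this PR, showing what the new pattern matches; the sample namespace names are invented:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// New pattern from config-defaults.yaml: kube-system plus 1.14-style
	// e2e namespaces such as "kubectl-5690".
	ignore := regexp.MustCompile(`^kube-system|((downward-api|kubectl|projected|statefulset|pod-network)-.*)$`)

	for _, ns := range []string{
		"kube-system",           // matched by the anchored first branch
		"kubectl-5690",          // 1.14-style e2e namespace: matched
		"e2e-tests-kubectl-123", // old-style name: still matched, since the second branch is not anchored at the start
		"default",               // regular namespace: not matched
	} {
		fmt.Printf("%-24s ignored=%v\n", ns, ignore.MatchString(ns))
	}
}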
2 changes: 1 addition & 1 deletion cluster/manifests/01-visibility/priority-logging.yaml
@@ -1,4 +1,4 @@
-apiVersion: scheduling.k8s.io/v1alpha1
+apiVersion: scheduling.k8s.io/v1
 kind: PriorityClass
 metadata:
   name: visibility-logging
2 changes: 1 addition & 1 deletion cluster/manifests/01-visibility/priority.yaml
@@ -1,4 +1,4 @@
-apiVersion: scheduling.k8s.io/v1alpha1
+apiVersion: scheduling.k8s.io/v1
 kind: PriorityClass
 metadata:
   name: visibility-zmon
2 changes: 1 addition & 1 deletion cluster/manifests/emergency-access-service/ingress.yaml
@@ -1,5 +1,5 @@
 {{ if eq .Environment "production" }}
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1beta1
 kind: Ingress
 metadata:
   name: emergency-access-service
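Kubernetes 1.14 introduced networking.k8s.io/v1beta1 as the successor to extensions/v1beta1 for Ingress, which is what the manifest above moves to. A hedged sketch of reading the same object through the matching client-go group; the "kube-system" namespace is an assumption, since the fragment above does not show it:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getEmergencyAccessIngress fetches the Ingress via the networking.k8s.io/v1beta1
// client that client-go gained in the 1.14 release.
func getEmergencyAccessIngress(cs kubernetes.Interface) error {
	ing, err := cs.NetworkingV1beta1().Ingresses("kube-system").Get("emergency-access-service", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("ingress %s has %d rule(s)\n", ing.Name, len(ing.Spec.Rules))
	return nil
}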
@@ -1,4 +1,4 @@
-apiVersion: scheduling.k8s.io/v1alpha1
+apiVersion: scheduling.k8s.io/v1
 kind: PriorityClass
 metadata:
   name: autoscaling-buffer
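The three PriorityClass manifests above move from scheduling.k8s.io/v1alpha1 to the API that went GA in Kubernetes 1.14. For comparison, a sketch of the equivalent object built against the v1 client; the priority value and description are assumptions, since the diffs only show the apiVersion line changing:

package main

import (
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPriorityClass mirrors the manifests above through the scheduling.k8s.io/v1
// client available since Kubernetes 1.14.
func createPriorityClass(cs kubernetes.Interface, name string) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: name},
		Value:       1000, // assumed; the real value is not visible in the diff
		Description: "sketch of a v1 PriorityClass",
	}
	_, err := cs.SchedulingV1().PriorityClasses().Create(pc)
	return err
}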
6 changes: 3 additions & 3 deletions cluster/manifests/kube-proxy/daemonset.yaml
@@ -5,7 +5,7 @@ metadata:
   namespace: kube-system
   labels:
     application: kube-proxy
-    version: v1.13.7
+    version: v1.14.4
 spec:
   selector:
     matchLabels:
@@ -17,7 +17,7 @@ spec:
       name: kube-proxy
      labels:
         application: kube-proxy
-        version: v1.13.7
+        version: v1.14.4
       annotations:
         config/hash: {{"configmap.yaml" | manifestHash}}
     spec:
@@ -31,7 +31,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kube-proxy
-        image: registry.opensource.zalan.do/teapot/kube-proxy:v1.13.7
+        image: registry.opensource.zalan.do/teapot/kube-proxy:v1.14.4
         args:
         - --hostname-override=$(HOSTNAME_OVERRIDE)
         - --config=/config/kube-proxy.yaml
18 changes: 9 additions & 9 deletions cluster/node-pools/master-default/userdata.clc.yaml
@@ -180,7 +180,7 @@ systemd:
 After=docker.service dockercfg.service meta-data-iptables.service private-ipv4.service
 
 [Service]
-Environment=KUBELET_IMAGE_TAG=v1.13.7
+Environment=KUBELET_IMAGE_TAG=v1.14.4
 Environment=KUBELET_IMAGE_ARGS=--exec=/kubelet
 Environment=KUBELET_IMAGE_URL=docker://registry.opensource.zalan.do/teapot/kubelet
 Environment="RKT_RUN_ARGS=--insecure-options=image \
@@ -361,7 +361,7 @@ storage:
   namespace: kube-system
   labels:
     application: kube-apiserver
-    version: v1.13.7
+    version: v1.14.4
   annotations:
     kubernetes-log-watcher/scalyr-parser: |
       [{"container": "webhook", "parser": "json-structured-log"}]
@@ -373,7 +373,7 @@
   hostNetwork: true
   containers:
   - name: kube-apiserver
-    image: registry.opensource.zalan.do/teapot/kube-apiserver:v1.13.7
+    image: registry.opensource.zalan.do/teapot/kube-apiserver:v1.14.4
     args:
     - --apiserver-count={{ .Values.apiserver_count }}
     - --bind-address=0.0.0.0
@@ -711,15 +711,15 @@ storage:
   namespace: kube-system
   labels:
     application: kube-controller-manager
-    version: v1.13.7
+    version: v1.14.4
 spec:
   priorityClassName: system-node-critical
   tolerations:
   - key: node-role.kubernetes.io/master
     effect: NoSchedule
   containers:
   - name: kube-controller-manager
-    image: registry.opensource.zalan.do/teapot/kube-controller-manager:v1.13.7
+    image: registry.opensource.zalan.do/teapot/kube-controller-manager:v1.14.4
     args:
     - --kubeconfig=/etc/kubernetes/controller-kubeconfig
     - --leader-elect=true
@@ -780,7 +780,7 @@ storage:
   namespace: kube-system
   labels:
     application: kube-scheduler
-    version: v1.13.7
+    version: v1.14.4
 spec:
   priorityClassName: system-node-critical
   tolerations:
@@ -789,7 +789,7 @@
   hostNetwork: true
   containers:
   - name: kube-scheduler
-    image: registry.opensource.zalan.do/teapot/kube-scheduler:v1.13.7
+    image: registry.opensource.zalan.do/teapot/kube-scheduler:v1.14.4
     args:
     - --master=http://127.0.0.1:8080
     - --leader-elect=true
@@ -1248,7 +1248,7 @@ storage:
 --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf,readOnly=true \
 --mount volume=dns,target=/etc/resolv.conf \
 --net=host \
-docker://registry.opensource.zalan.do/teapot/kubectl:v1.13.7 \
+docker://registry.opensource.zalan.do/teapot/kubectl:v1.14.4 \
 --exec=/kubectl -- \
 --kubeconfig=/etc/kubernetes/kubeconfig \
 label node "$(hostname)" \
@@ -1261,7 +1261,7 @@
 --net=host \
 --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf,readOnly=true \
 --mount volume=dns,target=/etc/resolv.conf \
-docker://registry.opensource.zalan.do/teapot/kubectl:v1.13.7 \
+docker://registry.opensource.zalan.do/teapot/kubectl:v1.14.4 \
 --exec=/kubectl -- \
 --kubeconfig=/etc/kubernetes/kubeconfig \
 drain "$(hostname)" \
6 changes: 3 additions & 3 deletions cluster/node-pools/worker-default/userdata.clc.yaml
@@ -179,7 +179,7 @@ systemd:
 After=docker.service dockercfg.service meta-data-iptables.service private-ipv4.service collect-instance-metadata.service
 
 [Service]
-Environment=KUBELET_IMAGE_TAG=v1.13.7
+Environment=KUBELET_IMAGE_TAG=v1.14.4
 Environment=KUBELET_IMAGE_ARGS=--exec=/kubelet
 Environment=KUBELET_IMAGE_URL=docker://registry.opensource.zalan.do/teapot/kubelet
 Environment="RKT_RUN_ARGS=--insecure-options=image \
@@ -488,7 +488,7 @@ storage:
 --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf,readOnly=true \
 --mount volume=dns,target=/etc/resolv.conf \
 --net=host \
-docker://registry.opensource.zalan.do/teapot/kubectl:v1.13.7 \
+docker://registry.opensource.zalan.do/teapot/kubectl:v1.14.4 \
 --exec=/kubectl -- \
 --kubeconfig=/etc/kubernetes/kubeconfig \
 label node "$(hostname)" \
@@ -501,7 +501,7 @@
 --net=host \
 --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf,readOnly=true \
 --mount volume=dns,target=/etc/resolv.conf \
-docker://registry.opensource.zalan.do/teapot/kubectl:v1.13.7 \
+docker://registry.opensource.zalan.do/teapot/kubectl:v1.14.4 \
 --exec=/kubectl -- \
 --kubeconfig=/etc/kubernetes/kubeconfig \
 drain "$(hostname)" \
2 changes: 1 addition & 1 deletion test/e2e/Makefile
@@ -2,7 +2,7 @@
 
 BINARY ?= kubernetes-on-aws-e2e
 VERSION ?= $(shell git describe --tags --always --dirty)
-KUBE_VERSION ?= v1.13.5
+KUBE_VERSION ?= v1.14.4
 IMAGE ?= registry-write.opensource.zalan.do/teapot/$(BINARY)
 TAG ?= $(VERSION)
 DOCKERFILE ?= Dockerfile
18 changes: 9 additions & 9 deletions test/e2e/README.md
@@ -86,11 +86,11 @@ scratch and test the Kubernetes type Foo.
 	defer func() {
 		By("deleting the foo")
 		defer GinkgoRecover()
-		err2 := cs.Core().Foo(ns).Delete(foo.Name, metav1.NewDeleteOptions(0))
+		err2 := cs.CoreV1().Foo(ns).Delete(foo.Name, metav1.NewDeleteOptions(0))
 		Expect(err2).NotTo(HaveOccurred())
 	}()
 	// creates the Ingress Object
-	_, err := cs.Core().Foo(ns).Create(foo)
+	_, err := cs.CoreV1().Foo(ns).Create(foo)
 	Expect(err).NotTo(HaveOccurred())
 })
 })
@@ -114,10 +114,10 @@ scratch and test the Kubernetes type Foo.
 	defer func() {
 		By("deleting the pod")
 		defer GinkgoRecover()
-		err2 := cs.Core().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(0))
+		err2 := cs.CoreV1().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(0))
 		Expect(err2).NotTo(HaveOccurred())
 	}()
-	_, err = cs.Core().Pods(ns).Create(pod)
+	_, err = cs.CoreV1().Pods(ns).Create(pod)
 	Expect(err).NotTo(HaveOccurred())
 	framework.ExpectNoError(f.WaitForPodRunning(pod.Name))
 ```
@@ -138,10 +138,10 @@ scratch and test the Kubernetes type Foo.
 	defer func() {
 		By("deleting the service")
 		defer GinkgoRecover()
-		err2 := cs.Core().Services(ns).Delete(service.Name, metav1.NewDeleteOptions(0))
+		err2 := cs.CoreV1().Services(ns).Delete(service.Name, metav1.NewDeleteOptions(0))
 		Expect(err2).NotTo(HaveOccurred())
 	}()
-	_, err := cs.Core().Services(ns).Create(service)
+	_, err := cs.CoreV1().Services(ns).Create(service)
 	Expect(err).NotTo(HaveOccurred())
 ```
 
@@ -164,14 +164,14 @@ Create Kubernetes ingress object:
 	defer func() {
 		By("deleting the ingress")
 		defer GinkgoRecover()
-		err2 := cs.Extensions().Ingresses(ns).Delete(ing.Name, metav1.NewDeleteOptions(0))
+		err2 := cs.ExtensionsV1beta1().Ingresses(ns).Delete(ing.Name, metav1.NewDeleteOptions(0))
 		Expect(err2).NotTo(HaveOccurred())
 	}()
-	ingressCreate, err := cs.Extensions().Ingresses(ns).Create(ing)
+	ingressCreate, err := cs.ExtensionsV1beta1().Ingresses(ns).Create(ing)
 	Expect(err).NotTo(HaveOccurred())
 	addr, err := jig.WaitForIngressAddress(cs, ns, ingressCreate.Name, 3*time.Minute)
 	Expect(err).NotTo(HaveOccurred())
-	ingress, err := cs.Extensions().Ingresses(ns).Get(ing.Name, metav1.GetOptions{ResourceVersion: "0"})
+	ingress, err := cs.ExtensionsV1beta1().Ingresses(ns).Get(ing.Name, metav1.GetOptions{ResourceVersion: "0"})
 	Expect(err).NotTo(HaveOccurred())
 	By(fmt.Sprintf("ALB endpoint from ingress status: %s", ingress.Status.LoadBalancer.Ingress[0].Hostname))
 ```
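The README updates follow client-go's move from unversioned accessors (cs.Core(), cs.Extensions()) to explicit group-version methods (cs.CoreV1(), cs.ExtensionsV1beta1(), and so on). A self-contained sketch of the new style, assuming only that cs is any kubernetes.Interface as in the examples above:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPods shows the versioned accessor style: each API group is addressed
// by an explicit group-version method on the clientset.
func listPods(cs kubernetes.Interface, ns string) error {
	pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("found %d pods in %s\n", len(pods.Items), ns)
	return nil
}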
20 changes: 16 additions & 4 deletions test/e2e/apiserver.go
@@ -23,7 +23,9 @@ import (
 
 	. "github.com/onsi/ginkgo"
 	. "github.com/onsi/gomega"
+
 	appsv1 "k8s.io/api/apps/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/labels"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/kubernetes/test/e2e/framework"
@@ -48,8 +50,13 @@ var _ = framework.KubeDescribe("API Server webhook tests", func() {
 		By("Creating deployment " + nameprefix + " in namespace " + ns)
 
 		deployment := createImagePolicyWebhookTestDeployment(nameprefix+"-", ns, tag, podname, replicas)
-		_, err := cs.ExtensionsV1beta1().Deployments(ns).Create(deployment)
-		defer deleteDeployment(cs, ns, deployment)
+		_, err := cs.AppsV1().Deployments(ns).Create(deployment)
+		defer func() {
+			By(fmt.Sprintf("Delete a compliant deployment: %s", deployment.Name))
+			defer GinkgoRecover()
+			err := cs.AppsV1().Deployments(ns).Delete(deployment.Name, metav1.NewDeleteOptions(0))
+			Expect(err).NotTo(HaveOccurred())
+		}()
 		Expect(err).NotTo(HaveOccurred())
 		label := map[string]string{
 			"app": podname,
@@ -72,9 +79,14 @@ var _ = framework.KubeDescribe("API Server webhook tests", func() {
 		By("Creating deployment " + nameprefix + " in namespace " + ns)
 
 		deployment := createImagePolicyWebhookTestDeployment(nameprefix+"-", ns, tag, podname, replicas)
-		_, err := cs.ExtensionsV1beta1().Deployments(ns).Create(deployment)
+		_, err := cs.AppsV1().Deployments(ns).Create(deployment)
 		Expect(err).NotTo(HaveOccurred())
-		defer deleteDeployment(cs, ns, deployment)
+		defer func() {
+			By(fmt.Sprintf("Delete a compliant deployment: %s", deployment.Name))
+			defer GinkgoRecover()
+			err := cs.AppsV1().Deployments(ns).Delete(deployment.Name, metav1.NewDeleteOptions(0))
+			Expect(err).NotTo(HaveOccurred())
+		}()
 		err = framework.WaitForDeploymentWithCondition(cs, ns, deployment.Name, "FailedCreate", appsv1.DeploymentReplicaFailure)
 		Expect(err).NotTo(HaveOccurred())
 	})
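The inlined defer blocks above replace the previous deleteDeployment helper. If the duplication grows, the same pattern could be factored back out; a sketch under that assumption, with an illustrative helper name that is not from this PR:

package e2e

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deferDeleteDeployment returns a cleanup function for use with `defer`:
// it deletes the deployment with grace period 0 and fails the spec if the
// delete errors. GinkgoRecover keeps a failed assertion inside the deferred
// call from panicking the whole test process.
func deferDeleteDeployment(cs kubernetes.Interface, ns string, d *appsv1.Deployment) func() {
	return func() {
		defer GinkgoRecover()
		err := cs.AppsV1().Deployments(ns).Delete(d.Name, metav1.NewDeleteOptions(0))
		Expect(err).NotTo(HaveOccurred())
	}
}

Call sites would then read: defer deferDeleteDeployment(cs, ns, deployment)().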
4 changes: 2 additions & 2 deletions test/e2e/audit.go
@@ -118,10 +118,10 @@ func expectEvents(f *framework.Framework, expectedEvents []utils.AuditEvent) {
 		missingReport, err := utils.CheckAuditLines(stream, expectedEvents, auditv1.SchemeGroupVersion)
 		if err != nil {
 			framework.Logf("Failed to observe audit events: %v", err)
-		} else if len(missingReport) > 0 {
+		} else if len(missingReport.MissingEvents) > 0 {
 			framework.Logf("Events %#v not found!", missingReport)
 		}
-		return len(missingReport) == 0, nil
+		return len(missingReport.MissingEvents) == 0, nil
 	})
 	framework.ExpectNoError(err, "after %v failed to observe audit events", pollingTimeout)
 }
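The two changed length checks follow a shape change in the 1.14 test utilities: utils.CheckAuditLines no longer returns a plain slice of missing events but a report struct whose MissingEvents field carries them. A rough sketch of that shape; field names other than MissingEvents are assumptions:

package main

// AuditEvent is a stand-in for the real utils.AuditEvent type.
type AuditEvent struct{}

// MissingEventsReport approximates the report returned by CheckAuditLines;
// only len(report.MissingEvents) matters to the test above.
type MissingEventsReport struct {
	NumEventsChecked int          // assumed bookkeeping field
	MissingEvents    []AuditEvent // expected events not found in the audit log
}

func main() {
	report := &MissingEventsReport{}
	ok := len(report.MissingEvents) == 0 // the check the diff switches to
	_ = ok
}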
8 changes: 4 additions & 4 deletions test/e2e/external_dns.go
@@ -50,26 +50,26 @@ var _ = framework.KubeDescribe("External DNS creation", func() {
 
 		By("Creating service " + serviceName + " in namespace " + ns)
 		defer func() {
-			err := cs.Core().Services(ns).Delete(serviceName, nil)
+			err := cs.CoreV1().Services(ns).Delete(serviceName, nil)
 			Expect(err).NotTo(HaveOccurred())
 		}()
 
 		hostName := fmt.Sprintf("%s-%d.%s", serviceName, time.Now().UTC().Unix(), E2EHostedZone())
 		service := createServiceTypeLoadbalancer(serviceName, hostName, labels, port)
 
-		_, err := cs.Core().Services(ns).Create(service)
+		_, err := cs.CoreV1().Services(ns).Create(service)
 		Expect(err).NotTo(HaveOccurred())
 
 		By("Submitting the pod to kubernetes")
 		pod := createNginxPod(nameprefix, ns, labels, port)
 		defer func() {
 			By("deleting the pod")
 			defer GinkgoRecover()
-			err2 := cs.Core().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(0))
+			err2 := cs.CoreV1().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(0))
 			Expect(err2).NotTo(HaveOccurred())
 		}()
 
-		_, err = cs.Core().Pods(ns).Create(pod)
+		_, err = cs.CoreV1().Pods(ns).Create(pod)
 		Expect(err).NotTo(HaveOccurred())
 
 		framework.ExpectNoError(f.WaitForPodRunning(pod.Name))
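The test drives external-dns by creating a LoadBalancer Service whose desired DNS name is passed through the well-known hostname annotation. A sketch of what createServiceTypeLoadbalancer plausibly builds; the helper's real body is not shown in this PR, so treat the details as assumptions:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// loadBalancerService builds a Service of type LoadBalancer annotated with
// the hostname that external-dns watches for and turns into a DNS record.
func loadBalancerService(name, hostName string, labels map[string]string, port int32) *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: labels,
			Annotations: map[string]string{
				// external-dns creates a record for this name.
				"external-dns.alpha.kubernetes.io/hostname": hostName,
			},
		},
		Spec: v1.ServiceSpec{
			Type:     v1.ServiceTypeLoadBalancer,
			Selector: labels,
			Ports: []v1.ServicePort{{
				Port:       port,
				TargetPort: intstr.FromInt(int(port)),
			}},
		},
	}
}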