
TestStartStop/old-k8s-version/VerifyKubernetesImages fails with 'unknown service runtime.v1.ImageService' when CR is docker #17646

Closed
prezha opened this issue Nov 18, 2023 · 4 comments · Fixed by #17647
Labels
area/testing kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test.


@prezha (Contributor) commented Nov 18, 2023

What Happened?

As seen in the logs (e.g., KVM_Linux 17553) and replicated locally:

$ make integration -e TEST_ARGS="-minikube-start-args='--driver=kvm2 --container-runtime=docker --alsologtostderr -v=7' -test.run TestStartStop/group/old-k8s-version --cleanup=false"
go test -ldflags="-X k8s.io/minikube/pkg/version.version=v1.32.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.32.1-1699648094-17581 -X k8s.io/minikube/pkg/version.gitCommitID="8ac4c93b6f3318c0f631b68a0c8f5399f45a5807-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -v -test.timeout=90m ./test/integration --tags="integration " -minikube-start-args='--driver=kvm2 --container-runtime=docker --alsologtostderr -v=7' -test.run TestStartStop/group/old-k8s-version --cleanup=false 2>&1 | tee "./out/testout_8ac4c93b6.txt"
Found 16 cores, limiting parallelism with --test.parallel=9
=== RUN   TestStartStop
=== PAUSE TestStartStop
=== CONT  TestStartStop
=== RUN   TestStartStop/group
=== RUN   TestStartStop/group/old-k8s-version
=== PAUSE TestStartStop/group/old-k8s-version
=== CONT  TestStartStop/group/old-k8s-version
=== RUN   TestStartStop/group/old-k8s-version/serial
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
    start_stop_delete_test.go:186: (dbg) Run:  out/minikube start -p old-k8s-version-473421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=docker --alsologtostderr -v=7 --kubernetes-version=v1.16.0
    start_stop_delete_test.go:186: (dbg) Done: out/minikube start -p old-k8s-version-473421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=docker --alsologtostderr -v=7 --kubernetes-version=v1.16.0: (2m15.004893489s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
    start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-473421 create -f testdata/busybox.yaml
    start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
    helpers_test.go:344: "busybox" [cd70b616-3ccd-46b2-a775-d8e8facf3537] Pending
    helpers_test.go:344: "busybox" [cd70b616-3ccd-46b2-a775-d8e8facf3537] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
    helpers_test.go:344: "busybox" [cd70b616-3ccd-46b2-a775-d8e8facf3537] Running
    start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.049170462s
    start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-473421 exec busybox -- /bin/sh -c "ulimit -n"
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
    start_stop_delete_test.go:205: (dbg) Run:  out/minikube addons enable metrics-server -p old-k8s-version-473421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-473421 describe deploy/metrics-server -n kube-system
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
    start_stop_delete_test.go:228: (dbg) Run:  out/minikube stop -p old-k8s-version-473421 --alsologtostderr -v=3
    start_stop_delete_test.go:228: (dbg) Done: out/minikube stop -p old-k8s-version-473421 --alsologtostderr -v=3: (12.202966843s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
    start_stop_delete_test.go:239: (dbg) Run:  out/minikube status --format={{.Host}} -p old-k8s-version-473421 -n old-k8s-version-473421
    start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube status --format={{.Host}} -p old-k8s-version-473421 -n old-k8s-version-473421: exit status 7 (92.126937ms)

        -- stdout --
                Stopped

        -- /stdout --
    start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
    start_stop_delete_test.go:246: (dbg) Run:  out/minikube addons enable dashboard -p old-k8s-version-473421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
    start_stop_delete_test.go:256: (dbg) Run:  out/minikube start -p old-k8s-version-473421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=docker --alsologtostderr -v=7 --kubernetes-version=v1.16.0
    start_stop_delete_test.go:256: (dbg) Done: out/minikube start -p old-k8s-version-473421 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=docker --alsologtostderr -v=7 --kubernetes-version=v1.16.0: (7m20.464737746s)
    start_stop_delete_test.go:262: (dbg) Run:  out/minikube status --format={{.Host}} -p old-k8s-version-473421 -n old-k8s-version-473421
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
    start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
    helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cd9px" [d330aa7b-85c0-4819-92a6-dbf3e5fa23e5] Running
    start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014493652s
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
    start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
    helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cd9px" [d330aa7b-85c0-4819-92a6-dbf3e5fa23e5] Running
    start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018790122s
    start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-473421 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
    start_stop_delete_test.go:304: (dbg) Run:  out/minikube ssh -p old-k8s-version-473421 "sudo crictl images -o json"
    start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube ssh -p old-k8s-version-473421 "sudo crictl images -o json": exit status 1 (249.292286ms)

        -- stdout --
                FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService

        -- /stdout --
        ** stderr **
                ssh: Process exited with status 1

        ** /stderr **
    start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube ssh -p old-k8s-version-473421 \"sudo crictl images -o json\"": exit status 1
    start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
        FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService
    start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
          []string{
        -       "k8s.gcr.io/coredns:1.6.2",
        -       "k8s.gcr.io/etcd:3.3.15-0",
        -       "k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
        -       "k8s.gcr.io/kube-apiserver:v1.16.0",
        -       "k8s.gcr.io/kube-controller-manager:v1.16.0",
        -       "k8s.gcr.io/kube-proxy:v1.16.0",
        -       "k8s.gcr.io/kube-scheduler:v1.16.0",
        -       "k8s.gcr.io/pause:3.1",
          }
    helpers_test.go:222: -----------------------post-mortem--------------------------------
...

Attach the log file

.

Operating System

None

Driver

None

@afbjorklund (Collaborator):

The same thing would happen with containerd; the installed version of crictl is too new for these.

@spowelljr (Member):

> Same thing would happen with containerd, the installed version of crictl is too new for these.

It actually seems to work for containerd, though I'm not sure why.

@prezha (Contributor, Author) commented Nov 21, 2023

> Same thing would happen with containerd, the installed version of crictl is too new for these.

> It actually seems to work for containerd, why though I'm unsure.

When containerd is used as the container runtime, we use the latest release (v1.7.9 at the moment), which supports the CRI v1 API, and crictl is configured (via /etc/crictl.yaml) to use it through runtime-endpoint: unix:///run/containerd/containerd.sock, so I think we should not expect issues there.

# crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.9
RuntimeApiVersion:  v1
# crictl --version
crictl version v1.28.0

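For reference, the crictl configuration mentioned above lives in /etc/crictl.yaml. A minimal sketch of what that file would contain for the containerd case (the endpoint is taken from the comment above; the timeout value is an assumption, not copied from a live minikube node):

```yaml
# /etc/crictl.yaml -- sketch only; actual contents on a minikube node may differ.
# Point both the runtime and image services at containerd's socket:
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
# Timeout (in seconds) for connecting to the endpoint; assumed value:
timeout: 10
```

With the docker runtime on Kubernetes 1.16, the equivalent endpoint is the dockershim socket, which only serves the v1alpha2 CRI, hence the "unknown service runtime.v1.ImageService" failure from a v1-only crictl.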
@spowelljr (Member):

I see. Looking at the crictl support matrix, it's still not ideal (K8s 1.16.0 with crictl v1.28.0), but that is out of scope for this issue, which is about fixing the test.

@spowelljr spowelljr added area/testing kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. labels Nov 22, 2023