
Failure on finding service with default K8s cluster domain #327

Closed
hexknight01 opened this issue Oct 31, 2023 · 1 comment
Labels
bug Something isn't working

hexknight01 commented Oct 31, 2023

Brief summary

The K8s cluster domain is set to cluster.local by default. However, our K8s cluster is configured with a different domain, so the operator cannot find the service of an object of kind K6. As a result, StartJobs cannot start the jobs as expected.

The URL used to look up the K8s service is hardcoded in the StartJobs function:

resp, err := http.Get(fmt.Sprintf("http://%v.%v.svc.cluster.local:6565/v1/status", service.ObjectMeta.Name, service.ObjectMeta.Namespace))

k6-operator version or image

https://github.com/grafana/k6-operator/releases/tag/v0.0.11

K6 YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"control-plane":"controller-manager"},"name":"k6-operator-controller-manager","namespace":"k6-operator-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"control-plane":"controller-manager"}},"template":{"metadata":{"labels":{"control-plane":"controller-manager"}},"spec":{"containers":[{"args":["--secure-listen-address=0.0.0.0:8443","--upstream=http://127.0.0.1:8080/","--logtostderr=true","--v=10"],"image":"gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0","name":"kube-rbac-proxy","ports":[{"containerPort":8443,"name":"https"}]},{"args":["--metrics-addr=127.0.0.1:8080","--enable-leader-election"],"command":["/manager"],"image":"registry-gitlab.zalopay.vn/top/docker-images/k6-extended:v1.1","name":"manager","resources":{"limits":{"cpu":"100m","memory":"100Mi"},"requests":{"cpu":"100m","memory":"50Mi"}}}],"serviceAccountName":"k6-operator-controller","terminationGracePeriodSeconds":10}}}}
  creationTimestamp: "2023-10-31T04:14:56Z"
  generation: 4
  labels:
    control-plane: controller-manager
  name: k6-operator-controller-manager
  namespace: k6-operator-system
  resourceVersion: "313185098"
  uid: 710fda5c-1c3d-43fd-b09b-08e14a505a10
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      control-plane: controller-manager
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=10
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
        imagePullPolicy: IfNotPresent
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --metrics-addr=127.0.0.1:8080
        - --enable-leader-election
        command:
        - /manager
        image: ghcr.io/grafana/k6-operator
        imagePullPolicy: IfNotPresent
        name: manager
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-gitlab-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: k6-operator-controller
      serviceAccountName: k6-operator-controller
      terminationGracePeriodSeconds: 10
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-10-31T04:22:05Z"
    lastUpdateTime: "2023-10-31T04:22:05Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-10-31T04:14:56Z"
    lastUpdateTime: "2023-10-31T10:19:59Z"
    message: ReplicaSet "k6-operator-controller-manager-64c65bdbc5" has successfully
      progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 4
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Other environment details (if applicable)

No response

Steps to reproduce the problem

Use make deploy as mentioned in README.md

Expected behaviour

The k6-operator is able to find the required service.

Actual behaviour

The operator is not able to find the service via its DNS record, because the URL is hardcoded with the cluster.local domain.

Operator Log:

2023-10-31T10:21:07Z ERROR controllers.K6 failed to get status from k6-sample-service 
github.com/grafana/k6-operator/controllers.isServiceReady                                       
/workspace/controllers/k6_start.go:23                                                       
github.com/grafana/k6-operator/controllers.StartJobs
hexknight01 added the bug label Oct 31, 2023
yorugac (Collaborator) commented Jan 12, 2024

@nhatnam1198, this issue got a bit lost in the torrent of others, but it's actually a duplicate of an older issue #233 which was recently fixed. Please try the latest version of k6-operator: it should work. If it doesn't, please re-open with additional details. Thanks.

@yorugac yorugac closed this as completed Jan 12, 2024