
error: argument --master-port: invalid int value bug #1226

Closed
dbrennand opened this issue Jan 15, 2020 · 53 comments
@dbrennand

Describe the bug

locust: error: argument --master-port: invalid int value: 'tcp://172.30.44.4:8089'

Expected behavior

Locust runs correctly without producing this error, since --master-port wasn't even supplied.

Actual behavior

When invoking locust in master mode using locust -f /scripts/locust-script.py --master, the following error occurs: locust: error: argument --master-port: invalid int value: 'tcp://172.30.44.4:8089'

Steps to reproduce

Start locust in the container using the command: locust -f /scripts/locust-script.py --master
Error occurs.

Environment settings

  • OS: RHEL
  • Python version: Python 3.6 from image registry.redhat.io/rhel8/python-36
  • Locust version: Using latest.
@dbrennand dbrennand added the bug label Jan 15, 2020
@cyberw
Collaborator

cyberw commented Jan 15, 2020

Hi! Use --master-host to specify the host (172.30.44.4) and --master-port to specify the port (8089)

@cyberw cyberw closed this as completed Jan 15, 2020
@cyberw cyberw added invalid and removed bug labels Jan 15, 2020
@cyberw
Collaborator

cyberw commented Jan 15, 2020

Go ahead and reopen this if this was not the case :)

@dbrennand
Author

dbrennand commented Jan 21, 2020

Hello. I'm not sure this solves the issue. Based on the documentation here to run locust in master mode you can use the following command locust -f my_locustfile.py --master.

The flags you are telling me to provide based on that documentation are to be used with the --slave flag.
Furthermore, based on that documentation, I believe the equivalent flags are: --master-bind-host=X.X.X.X and --master-bind-port=5557. However, the documentation states that I should just be able to supply locust -f my_locustfile.py --master and no other flags and locust should take care of the rest. However, it results in the error I am describing above and why I believe it is a bug.

@cyberw
Collaborator

cyberw commented Jan 21, 2020

Sorry, I misread your ticket. Yes, you are correct, locust should use defaults.

@cyberw cyberw reopened this Jan 21, 2020
@cyberw
Collaborator

cyberw commented Jan 21, 2020

It does work for me, though.

docker run -it --entrypoint=/bin/sh -p 8089:8089 --volume $PWD:/mnt/locust locustio/locust:0.13.5
/ $ cd /mnt/locust/
/mnt/locust $ locust -f rps.py --master
[2020-01-21 11:30:32,969] 53758910f649/INFO/locust.main: Starting web monitor at http://*:8089
[2020-01-21 11:30:32,969] 53758910f649/INFO/locust.main: Starting Locust 0.13.5

What is your docker run command line & relevant env vars?

@cjbehm

cjbehm commented Jan 24, 2020

I'm actually seeing the same thing when trying to launch it in kubernetes.
docker_start.sh is resulting in the following output, even if LOCUST_MODE is set to standalone

Starting Locust in standalone mode...
$ locust  --print-stats -f /locust/locustfile.py -H http://localhost
usage: locust [-h] [-H HOST] [--web-host WEB_HOST] [-P PORT] [-f LOCUSTFILE]
              [--csv CSVFILEBASE] [--csv-full-history] [--master] [--slave]
              [--master-host MASTER_HOST] [--master-port MASTER_PORT]
              [--master-bind-host MASTER_BIND_HOST]
              [--master-bind-port MASTER_BIND_PORT]
              [--heartbeat-liveness HEARTBEAT_LIVENESS]
              [--heartbeat-interval HEARTBEAT_INTERVAL]
              [--expect-slaves EXPECT_SLAVES] [--no-web] [-c NUM_CLIENTS]
              [-r HATCH_RATE] [-t RUN_TIME] [--skip-log-setup] [--step-load]
              [--step-clients STEP_CLIENTS] [--step-time STEP_TIME]
              [--loglevel LOGLEVEL] [--logfile LOGFILE] [--print-stats]
              [--only-summary] [--no-reset-stats] [--reset-stats] [-l]
              [--show-task-ratio] [--show-task-ratio-json] [-V]
              [--exit-code-on-error EXIT_CODE_ON_ERROR] [-s STOP_TIMEOUT]
              [LocustClass [LocustClass ...]]
locust: error: argument --master-port: invalid int value: 'tcp://10.96.24.182:5557'

If I launch a container with kubectl run (kubectl run shell --rm -i --tty --image locustio/locust:0.13.5 -- ash), I can then set environment variables and execute docker_start.sh, and it's fine.

@cyberw
Collaborator

cyberw commented Jan 24, 2020

@cjbehm Can you list your env vars? I don't have Kubernetes set up atm so I can't test it there...

@cjbehm

cjbehm commented Jan 24, 2020

@cyberw sure; here's the block of the deployment yaml that I just ran so that I could get in and get the environment without modifying the image (thus the cmd & args elements)

- image: locustio/locust:0.13.5
        imagePullPolicy: Always
        name: locust-master
        env:
          - name: TARGET_URL
            value: "http://localhost"
          - name: LOCUSTFILE_PATH
            value: /locust/locustfile.py
          - name: LOCUST_MODE
            value: standalone
          - name: LOCUST_OPTS
            value: --print-stats
        command: ["/bin/ash"]
        args: ["-c", "while true; do sleep 10;done"]
        volumeMounts:
          - mountPath: /locust
            name: locust-scripts
        ports:
        - containerPort: 5557
          name: comm
        - containerPort: 5558
          name: comm-plus-1
        - containerPort: 8089

And then the output of env. No idea where all those environment variables are being set from, definitely not from anything I configured.

/ $ env
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=locust-master-6b478cfccd-dzb75
LOCUST_OPTS=--print-stats
PYTHON_PIP_VERSION=19.3.1
SHLVL=1
LOCUST_MASTER_SERVICE_HOST=10.110.169.34
HOME=/home/locust
GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D
LOCUST_MODE=standalone
LOCUST_MASTER_PORT_5557_TCP_ADDR=10.110.169.34
LOCUST_MASTER_PORT_5558_TCP_ADDR=10.110.169.34
LOCUST_MASTER_SERVICE_PORT=5557
LOCUST_MASTER_PORT=tcp://10.110.169.34:5557
LOCUST_MASTER_PORT_5557_TCP_PORT=5557
LOCUST_MASTER_SERVICE_PORT_COMMUNICATION_PLUS_1=5558
LOCUST_MASTER_PORT_8089_TCP_ADDR=10.110.169.34
LOCUST_MASTER_PORT_5558_TCP_PORT=5558
LOCUST_MASTER_PORT_5557_TCP_PROTO=tcp
LOCUST_MASTER_PORT_5558_TCP_PROTO=tcp
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/ffe826207a010164265d9cc807978e3604d18ca0/get-pip.py
TERM=xterm
LOCUST_MASTER_PORT_8089_TCP_PORT=8089
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
LOCUST_MASTER_PORT_8089_TCP_PROTO=tcp
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LOCUST_MASTER_PORT_5557_TCP=tcp://10.110.169.34:5557
LANG=C.UTF-8
LOCUST_MASTER_PORT_5558_TCP=tcp://10.110.169.34:5558
LOCUST_MASTER_PORT_8089_TCP=tcp://10.110.169.34:8089
PYTHON_VERSION=3.6.9
TARGET_URL=http://localhost
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
LOCUSTFILE_PATH=/locust/locustfile.py
PYTHON_GET_PIP_SHA256=b86f36cc4345ae87bfd4f10ef6b2dbfa7a872fbff70608a1e43944d283fd0eee
LOCUST_MASTER_SERVICE_PORT_COMMUNICATION=5557
LOCUST_MASTER_SERVICE_PORT_WEB_UI=8089

@cjbehm

cjbehm commented Jan 24, 2020

If it helps, this is what docker pulled for the image

locustio/locust                      0.13.5              c086f29b5633        5 weeks ago 

@cyberw
Collaborator

cyberw commented Jan 24, 2020

This variable is picked up by ConfigArgParse (introduced in #1167) and passed to locust

LOCUST_MASTER_PORT=tcp://10.110.169.34:5557

I'll have a look at fixing it...
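For reference, here's a minimal standalone sketch of the mechanism (not Locust's actual code; it assumes ConfigArgParse's auto_env_var_prefix option, which is what maps LOCUST_* env vars onto command line options):

import os
import configargparse

# Simulate the variable Kubernetes injects for a Service named "locust-master"
os.environ["LOCUST_MASTER_PORT"] = "tcp://10.110.169.34:5557"

parser = configargparse.ArgumentParser(auto_env_var_prefix="LOCUST_")
parser.add_argument("--master-port", type=int, default=5557)

# Exits with: error: argument --master-port: invalid int value: 'tcp://10.110.169.34:5557'
parser.parse_args([])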

@cjbehm

cjbehm commented Jan 24, 2020

I was looking at that (line 35 in main.py), but I'm totally baffled as to what file(s) it could be reading.

@cyberw
Collaborator

cyberw commented Jan 24, 2020

Because all locust settings can now be set using env vars instead of on the command line, we could remove LOCUST_OPTS and simplify docker_start.sh.

@cyberw
Collaborator

cyberw commented Jan 24, 2020

Do you have a ./locust.conf file or ~/.locust.conf ?

@cjbehm

cjbehm commented Jan 24, 2020

That's one of the things that's bizarre: I can't find any locust.conf in the docker image. I don't have one locally, and those environment variables are somehow being set or populated even if I don't run locust (the cmd & args block I'm passing to k8s causes it to launch that rather than run docker_start.sh).

If I have time, I'll try building the docker image from master and work my way back to a point in time where those values aren't populated. I'm not sure where to even put debug info to see where those values are coming from.

@cyberw
Collaborator

cyberw commented Jan 24, 2020

I can't figure it out either...

@cyberw
Collaborator

cyberw commented Jan 24, 2020

I'm just guessing here, but what about the extraEnvs setting in helm? https://github.com/helm/charts/blob/dc395f80d5e0a5e22ae3cc562b51b848c2e748f2/stable/locust/values.yaml#L31

@cyberw
Collaborator

cyberw commented Jan 24, 2020

(a locust.conf file wouldn't impact environment vars, so they are coming from somewhere else)

@cjbehm

cjbehm commented Jan 24, 2020

I'm not using helm for this; it's just a straight k8s manifest and kubectl apply.

Even the IPs that are in the values aren't part of the system (and aren't even values in use outside of this docker k8s cluster).

@cyberw
Collaborator

cyberw commented Jan 24, 2020

ok, cool, that narrows it down at least (and I'm kind of a helm-noob :)

@cyberw
Collaborator

cyberw commented Jan 24, 2020

Hmm... I'm out of my depth here. I think the docker stuff could do with a rewrite now that env vars are supported out of the box in locust (removing LOCUST_OPTS, for one).

But I don't think I'm the best person to do it :)

@cjbehm

cjbehm commented Jan 24, 2020

Ahh, OK. I see what's happening; it's a clash between auto-populated env variables that come from creating "Service" objects in k8s.
It only clicked now, but naming a Service "locust-master" in k8s results in auto-generated environment variables like LOCUST_MASTER_SERVICE_HOST (and a bunch of other values), and those auto-generated names happen to overlap some of the names that locust looks at.

I'm not sure what the best solution is; documentation could help, but that's always fragile. Obviously, short-term I can just use different names for the k8s objects.
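A quick way to see the clash from inside the pod is to filter the environment for the LOCUST_ prefix (a tiny sketch; it just reproduces the relevant part of the env dump above):

import os

# Every LOCUST_* variable visible to the process; in the dump above these
# all come from the Kubernetes Service named "locust-master".
for key, value in sorted(os.environ.items()):
    if key.startswith("LOCUST_"):
        print(f"{key}={value}")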

@cyberw
Collaborator

cyberw commented Jan 24, 2020

Aha... hmm... I guess we could detect typical k8s env vars (like LOCUST_MASTER_SERVICE) and log a warning, or (if we also detect that LOCUST_MASTER_PORT is set to a non-integer value) throw an error with a more descriptive message that hints at this problem.

@cjbehm

cjbehm commented Jan 25, 2020

For something deterministic KUBERNETES_SERVICE_HOST is always set for containers running in k8s.

I'm really not sure what the right action is; locust explicitly wants to use environment variables and happens to use a naming pattern that people are likely to also use when naming the objects in a k8s manifest.

@cyberw
Collaborator

cyberw commented Jan 25, 2020

If we detect LOCUST_MASTER_SERVICE and check if LOCUST_MASTER_PORT is a non-integer value, we can throw an exception telling users to use a different name. We could of course disable env var reading if we detect KUBERNETES_SERVICE_HOST, but that would change the behaviour in a very sneaky way, so I think it is better to just force the user to use a different name.
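A hypothetical sketch of that check (this is just an illustration of the idea, not the fix that eventually shipped; the env var names follow the dumps earlier in this thread):

import os
import sys

def warn_on_k8s_env_clash():
    # A Kubernetes Service named "locust-master" injects LOCUST_MASTER_SERVICE_HOST
    # plus LOCUST_MASTER_PORT=tcp://..., which shadows Locust's --master-port option.
    master_port = os.environ.get("LOCUST_MASTER_PORT", "")
    if "LOCUST_MASTER_SERVICE_HOST" in os.environ and master_port and not master_port.isdigit():
        sys.exit(
            "LOCUST_MASTER_PORT=%r looks like a Kubernetes-generated service "
            "variable. Rename your 'locust-master' Service so its env vars "
            "don't clash with Locust's options." % master_port
        )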

@amribrahim

So I have the same issue, but I did not understand how to solve it.
[screenshot of the error]

@cyberw
Collaborator

cyberw commented Feb 13, 2020

Hi @amribrahim! Just name your container something other than locust-master. I'm guessing your deployment.yaml looks something like this:

- image: locustio/locust:0.13.5
        imagePullPolicy: Always
        name: locust-master
...

@amribrahim

Hey man, thank you! It runs now, but the issue is that the master did not connect to the slave.
[screenshot]

@amribrahim

@cyberw here is the deployment file

kind: ReplicationController
apiVersion: v1
metadata:
  name: locust-first
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    name: locust
    role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
        - name: locust
          image: amribrahim00/locust-floranow:1.7
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              value: https://marketplace-dev.floranow.com
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: locust-worker
  labels:
    name: locust
    role: worker
spec:
  replicas: 20
  selector:
    name: locust
    role: worker
  template:
    metadata:
      labels:
        name: locust
        role: worker
    spec:
      containers:
        - name: locust
          image: amribrahim00/locust-floranow:1.7
          env:
            - name: LOCUST_MODE
              value: worker
            - name: MASTER_HOST
              value: locust-first
            - name: MASTER_PORT
              value: '5557'
            - name: TARGET_HOST
              value: https://marketplace-dev.floranow.com
---
kind: Service
apiVersion: v1
metadata:
  name: locust-service
  labels:
    name: locust
    role: master
spec:
  ports:
    - port: 8089
      targetPort: loc-master-web
      protocol: TCP
      name: loc-master-web
    - port: 5557
      targetPort: loc-master-p1
      protocol: TCP
      name: loc-master-p1
    - port: 5558
      targetPort: loc-master-p2
      protocol: TCP
      name: loc-master-p2
  selector:
    name: locust
    role: master
  type: NodePort

@cyberw
Collaborator

cyberw commented Feb 13, 2020

There are some containers named locust in there; try changing that.

Sorry, I didn't see your first comment :)

Hmm... Sorry, I don't know the reason for the connection issue...

@ohjongsung

I'm facing the same issue. The official documentation says that --master-port is "Optionally used together with --slave to set the port number of the master node (defaults to 5557)", but it was wrong: the master-port is required even if the default value is used. If the master-port value is not set, the web port (8089) is used.

@heyman
Member

heyman commented Apr 24, 2020

@ohjongsung It sounds as if you're running into the same issue as many others in this thread. It's caused by environment variables that are automatically set by Kubernetes (and potentially other tools). Read the other posts in this thread if you want to understand more.

@wasimansari661

Don't know if it's correct, but my nodes stopped crash looping when I set the following environment variables in the worker and master, like this:

env:
  - name: LOCUST_MODE
    value: master
  - name: TARGET_URL
    value: https://api-tes.com
  - name: LOCUST_MASTER_PORT
    value: "5557"
  - name: LOCUSTFILE_PATH
    value: /locust/locustfile.py

But now I'm stuck on another problem that isn't related to this. The web UI loads OK, but when I start the load test, nothing happens and nothing is logged.

It's because the slaves can't see the master node...

If the provided set of environment variables is for a Locust slave, then it would be:

  • name: LOCUST_MODE value: slave
  • name: LOCUST_MASTER_HOST value: hostname of Master Node
  • name: LOCUST_MASTER_PORT value: "5557"
  • name: LOCUSTFILE_PATH value: /locust/locustfile.py
  • name: TARGET_URL value: https://api-tes.com

@kiranbhadale

kiranbhadale commented Apr 27, 2020

Even I'm running into the same issue of "locust: error: argument --master-port: invalid int value:".
I went through all the comments and made all the necessary changes mentioned above. Below is my master yaml file, which I run on k8s. After applying the yaml, I log in to the pod and manually try to run the locust file. I have upgraded locust to the latest version (0.14.6).

Also, a normal locust command without the --master argument throws the same error.

apiVersion: v1
kind: Namespace
metadata:
  name: locust-perf
  labels:
    name: locust-perf

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master-locust
  namespace: locust-perf
  labels:
    name: master-locust
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master-locust
  template:
    metadata:
      labels:
        app: master-locust
    spec:
      containers:
        - name: master-locust
          image: perf_locust:v0.9.5
          imagePullPolicy: Never
          stdin: true
          tty: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash"]
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              value: ''
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5555
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: locust-service
  namespace: locust-perf
  labels:
    app: master-locust
    role: master
spec:
  ports:
    - port: 8089
      targetPort: loc-master-web
      protocol: TCP
      name: loc-master-web
    - port: 5557
      targetPort: loc-master-p1
      protocol: TCP
      name: loc-master-p1
    - port: 5555
      targetPort: loc-master-p2
      protocol: TCP
      name: loc-master-p2
  selector:
    app: master-locust
    role: master
  type: LoadBalancer

@heyman
Member

heyman commented Apr 27, 2020

@kiranbhadale: Could you manually run env (to list all available environment variables) in the container where you get that error, and paste the output?

@kiranbhadale

kiranbhadale commented Apr 28, 2020

@heyman I have resolved the above issue by deleting the entire project namespace on k8s and creating everything from scratch. But after resolving it, when I try to connect from the slave machine, it gives me the following error:

locust: error: Unexpected value for LOCUST_MASTER: 'master-locust'. Expecting 'true', 'false', 'yes', 'no', '1' or '0'

Output for "env" on slave pod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
LOCUST_SERVICE_PORT_5555_TCP_ADDR=10.96.31.110
LOCUST_SERVICE_PORT_5557_TCP_ADDR=10.96.31.110
HOSTNAME=locust-worker-c48fb4d4-qccmk
LOCUST_SERVICE_PORT_5555_TCP_PORT=5555
LOCUST_SERVICE_SERVICE_PORT=8089
LOCUST_SERVICE_PORT=tcp://10.96.31.110:8089
LOCUST_SERVICE_PORT_5555_TCP_PROTO=tcp
PYTHON_PIP_VERSION=19.1.1
LOCUST_SERVICE_PORT_8089_TCP_ADDR=10.96.31.110
LOCUST_SERVICE_PORT_5557_TCP_PORT=5557
HOME=/root
LOCUST_SERVICE_PORT_5557_TCP_PROTO=tcp
LOCUST_SERVICE_SERVICE_PORT_LOC_MASTER_P1=5557
LOCUST_SERVICE_SERVICE_PORT_LOC_MASTER_P2=5555
LOCUST_SERVICE_PORT_8089_TCP_PORT=8089
GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D
LOCUST_SERVICE_PORT_8089_TCP_PROTO=tcp
LOCUST_MODE=worker
LOCUST_SERVICE_PORT_5555_TCP=tcp://10.96.31.110:5555
LOCUST_SERVICE_PORT_5557_TCP=tcp://10.96.31.110:5557
TERM=xterm
LOCUST_SERVICE_PORT_8089_TCP=tcp://10.96.31.110:8089
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LOCUST_MASTER=master-locust
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LOCUST_SERVICE_SERVICE_PORT_LOC_MASTER_WEB=8089
DISPLAY=:99
LANG=C.UTF-8
DEBIAN_FRONTEND=noninteractive
PYTHON_VERSION=3.7.3
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
LOCUST_SERVICE_SERVICE_HOST=10.96.31.110

@heyman
Member

heyman commented Apr 28, 2020

What happens if you rename the role to something other than "master"? I can't really see where the MASTER_ prefix would be coming from, if not that.

@kiranbhadale

kiranbhadale commented Apr 28, 2020

Since the metadata name for the master is locust-master in the master yaml, I have set the same in the slave config as well. It was working before in version 0.12.2, but due to a gevent issue I upgraded locust to 0.14.6, and it has started failing on the k8s cluster.

Slave Yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker
  namespace: locust-perf
  labels:    
    name: locust-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust-worker
  template:
    metadata:
      labels:
        app: locust-worker
    spec:
      containers:
        - name: locust-worker
          image: perf_locust:v0.9.5
          imagePullPolicy: Never
          tty: true
          stdin: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash"]
          resources: 
            limits: 
              cpu: 500m 
              memory: 512Mi
          env:
            - name: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER
              value: master-locust

@heyman
Member

heyman commented Apr 28, 2020

You should not be setting the master hostname in LOCUST_MASTER. Use LOCUST_MASTER_HOST for that.

@kiranbhadale

kiranbhadale commented Apr 28, 2020

Tried with LOCUST_MASTER_HOST and that is where the slave is unable to connect to the master; it gets stuck at INFO/locust.main: Starting Locust 0.14.6.
The master pod keeps waiting for the slave to connect.

@wasimansari661

wasimansari661 commented Apr 28, 2020

Tried with LOCUST_MASTER_HOST and that is where the slave is unable to connect to the master; it gets stuck at INFO/locust.main: Starting Locust 0.14.6.
The master pod keeps waiting for the slave to connect.

Did you provide LOCUST_MASTER_PORT value: "5557" in the slave config file?

@kiranbhadale

I tried your suggestion, but it still got stuck at "/INFO/locust.main: Starting Locust 0.14.6". Also, I thought that if we don't provide the port, the default should be picked up.

@wasimansari661

@kiranbhadale,

Are both your Locust master and Locust slave pods running?

Can you try to create the environment variables as below in the Locust slave:

  • name: LOCUST_MODE value: slave
  • name: LOCUST_MASTER_HOST value: locust-service name
  • name: LOCUST_MASTER_PORT value: "5557"
  • name: LOCUSTFILE_PATH value: /path/locust/locustfile.py
  • name: TARGET_URL value: https://target.url

@kiranbhadale

@wasimansari661 yes, both my slave and master pods are running. And I am running the locust file using a make command, which is part of the args in the yaml (e.g. make perf_test_slave target=<file_path>), so passing the file_path and target_url won't help.
Also, I tried to run the locust file by manually logging in to the slave pod and running the standard locust slave command, but in either case the result remained the same.

@wasimansari661

@kiranbhadale,

Can you please check your master Service configuration?
Ping your master Service IP from any other pod and check if you are getting a response.

@kiranbhadale

I tried to ping two IP addresses, one for the pod and one for the cluster (service). The yaml below can be referred to for the pod and service.

  1. I extracted the IP address of the pod using kubectl get pod -o wide -n locust-perf. This gave the IP address of the master pod, and I was able to ping it from the slave pod.

  2. Then I extracted the IP address of the service, i.e. the cluster IP, and the slave pod was unable to ping the cluster IP (see the note after the yaml below). Command used to extract the IP: kubectl get services -n locust-perf

apiVersion: v1
kind: Namespace
metadata:
  name: locust-perf
  labels:
    name: locust-perf

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master-locust
  namespace: locust-perf
  labels:
    name: master-locust
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master-locust
  template:
    metadata:
      labels:
        app: master-locust
    spec:
      containers:
        - name: master-locust
          image: perf_locust:v0.9.5
          imagePullPolicy: Never
          stdin: true
          tty: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash"]
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              value: ''
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5555
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: locust-service
  namespace: locust-perf
  labels:
    app: master-locust
    role: master
spec:
  ports:
    - port: 8089
      targetPort: loc-master-web
      protocol: TCP
      name: loc-master-web
    - port: 5557
      targetPort: loc-master-p1
      protocol: TCP
      name: loc-master-p1
    - port: 5555
      targetPort: loc-master-p2
      protocol: TCP
      name: loc-master-p2
  selector:
    app: master-locust
    role: master
  type: LoadBalancer
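A note on the ping test above: ClusterIP addresses are virtual, and Services generally don't answer ICMP, so a failed ping to the Service IP doesn't by itself mean the Service is broken. A TCP check against the master port is more meaningful; here is a minimal sketch to run from inside a worker pod (the Service name and port are the ones from the yaml above; adjust to your own setup):

import socket

try:
    # Cluster DNS should resolve the Service name; 5557 is the master's comm port.
    sock = socket.create_connection(("locust-service", 5557), timeout=5)
    print("master port reachable")
    sock.close()
except OSError as exc:
    print(f"cannot reach master: {exc}")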

@yeoji

yeoji commented Apr 28, 2020

@kiranbhadale I had the same issue with the workers not connecting to the master in k8s.

What is the value of LOCUST_MASTER_HOST in your worker configuration? It has to be the name of the locust service, which in your case looks to be locust-service.

@kiranbhadale

kiranbhadale commented Apr 28, 2020

@yeoji, initially I tried using the service name, but it didn't work; I guess there was some other problem. But the below-mentioned details helped me. Thanks.

@mmarquezv
I tried a few permutations and combinations and finally one worked. I kept the metadata name the same for the master and its service. Below are my configs, and they work like a charm. Since I have a few customizations in my implementation, I am not using the k8s locust integrated parameters to run the locust file; instead, I'm using a make command to run my locust. My master and service configs are in the same yaml file, whereas the slave is in a separate file. I hope this helps. For testing purposes, I have executed the below files on minikube, which shouldn't be a problem to replicate on the cloud, I suppose.

Apart from this, I created a fresh docker image, deleted the performance namespace, and started from scratch to avoid any conflicts from my previous builds.

Master and Service:

apiVersion: v1
kind: Namespace
metadata:
  name: locust-perf
  labels:
    name: locust-perf

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    name: lm-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lm-pod
  template:
    metadata:
      labels:
        app: lm-pod
    spec:
      containers:
        - name: lm-pod
          image: perf_locust:v0.9.7
          imagePullPolicy: Never
          stdin: true
          tty: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash","-c"]
          args: [<make command to run my locust file>]
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              value: ''
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5555
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    app: lm-pod
spec:
  ports:
    - port: 8089
      targetPort: loc-master-web
      protocol: TCP
      name: loc-master-web
    - port: 5557
      targetPort: loc-master-p1
      protocol: TCP
      name: loc-master-p1
    - port: 5555
      targetPort: loc-master-p2
      protocol: TCP
      name: loc-master-p2
  selector:
    app: lm-pod
  type: LoadBalancer

Slave Yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lw-pod
  namespace: locust-perf
  labels:
    name: lw-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lw-pod
  template:
    metadata:
      labels:
        app: lw-pod
    spec:
      containers:
        - name: lw-pod
          image: perf_locust:v0.9.7
          imagePullPolicy: Never
          tty: true
          stdin: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash","-c"]
          args: [<make command to run my locust file>]
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
          env:
            - name: LOCUST_MODE
              value: slave
            - name: LOCUST_MASTER_HOST
              value: lm-pod

@heyman
Member

heyman commented Apr 29, 2020

@mmarquezv:

Has anyone solved the "no slaves servers connected" problem?

That simply means that your slave/worker nodes haven't connected to the master node. This is most likely related to your Kubernetes config/setup.

Right now I'm thinking if there's a better alternative than Locust. I'm sad because I thought this was a good tool for my load tests.

I'm sorry you're struggling with getting Kubernetes to work. However, nothing you've presented suggests any actual issue with Locust. Also, please don't double-post the same message requesting help in multiple issues.

@cyberw
Collaborator

cyberw commented May 22, 2020

This should not happen any more now that we have renamed the env var for the master port to LOCUST_MASTER_NODE_PORT. (I mean the original issue; some of the follow-up discussion here might still apply, but that should be a separate ticket.)
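For anyone curious why the rename helps: Kubernetes exposes a Service to pods as docker-link-style env vars derived from the Service name (uppercased, with dashes turned into underscores), so a Service named locust-master generates exactly LOCUST_MASTER_PORT. A small illustration (the helper function is just for this sketch):

def k8s_env_prefix(service_name: str) -> str:
    # Kubernetes uppercases the Service name and replaces dashes with underscores.
    return service_name.upper().replace("-", "_")

# "locust-master" generates LOCUST_MASTER_PORT, which used to shadow Locust's
# option; the renamed LOCUST_MASTER_NODE_PORT no longer collides.
assert k8s_env_prefix("locust-master") + "_PORT" == "LOCUST_MASTER_PORT"
assert k8s_env_prefix("locust-master") + "_PORT" != "LOCUST_MASTER_NODE_PORT"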
