error: argument --master-port: invalid int value (bug) #1226
Comments
Hi! Use --master-host to specify the host (172.30.44.4) and --master-port to specify the port (8089).
Go ahead and reopen this if this was not the case :)
Hello. I'm not sure this solves the issue. Based on the documentation, to run locust in master mode you can use the following command. The flags you are telling me to provide are, per that documentation, meant to be used with the --slave flag.
Sorry, I misread your ticket. Yes, you are correct, locust should use defaults.
It does work for me though.
What is your docker run command line & relevant env vars?
I'm actually seeing the same thing when trying to launch it in Kubernetes.
If I launch a container with kubectl run, ...
@cjbehm Can you list your env vars? I don't have Kubernetes set up at the moment so I can't test it there...
@cyberw sure; here's the block of the deployment YAML that I just ran so that I could get in and get the environment without modifying the image (thus the cmd & args elements).
And then the output of env. No idea where all those environment variables are being set from; definitely not from anything I configured.
If it helps, this is what docker pulled for the image.
This variable is picked up by ConfigArgParse (introduced in #1167) and passed to locust: LOCUST_MASTER_PORT=tcp://10.110.169.34:5557. I'll have a look at fixing it...
I was looking at that (line 35 in main.py), but I'm totally baffled as to what file(s) it could be reading.
Because all locust settings can now be set using env vars instead of command line arguments, we could remove LOCUST_OPTS and simplify docker_start.sh.
Do you have a ./locust.conf file or ~/.locust.conf?
That's one of the things that's bizarre: I can't find any locust.conf in the docker image. I don't have one locally, and those environment variables are somehow being set or populated even if I don't run locust (the cmd & args block I'm passing to k8s causes it to launch that rather than run docker_start.sh). If I have time, I'll try building the docker image from master and work my way back to a point in time where those values aren't populated. I'm not sure where to even put debug info to see where those values are coming from.
I can't figure it out either...
I'm just guessing here, but what about the extraEnvs setting in Helm? https://github.com/helm/charts/blob/dc395f80d5e0a5e22ae3cc562b51b848c2e748f2/stable/locust/values.yaml#L31
(A locust.conf file wouldn't impact environment vars, so they are coming from somewhere else.)
I'm not using Helm for this; it's just a straight k8s manifest and kubectl apply. Even the IPs that are in the values aren't part of the system (and aren't even values in use outside of this docker k8s cluster).
Ok, cool, that narrows it down at least (and I'm kind of a Helm noob :)
Hmm... I'm out of my depth here. I think the docker stuff could do with a rewrite now that env vars are supported out of the box in locust (removing LOCUST_OPTS, for one). But I don't think I'm the best person to do it :)
Ahh, ok. I see what's happening; it's a clash between the auto-populated env variables that come from creating "Service" objects in k8s. I'm not sure what the best solution is; documentation could help, but that's always fragile. Obviously, short-term I can just use different names for the k8s objects.
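For context: Kubernetes injects Docker-link-style environment variables for every Service into all pods started afterwards in the same namespace, and a Service named locust-master expands to exactly the variable that maps to --master-port. A sketch of the clash, using the IP reported above:
kind: Service
apiVersion: v1
metadata:
  name: locust-master        # upper-cased, with '-' becoming '_': LOCUST_MASTER_*
spec:
  selector:
    app: locust-master
  ports:
    - port: 5557
Pods in the same namespace then see, among others, LOCUST_MASTER_SERVICE_HOST=10.110.169.34, LOCUST_MASTER_SERVICE_PORT=5557 and LOCUST_MASTER_PORT=tcp://10.110.169.34:5557; the last one is also the env var ConfigArgParse maps to --master-port, hence "invalid int value: 'tcp://10.110.169.34:5557'".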
Aha... hmm... I guess we could detect typical k8s env vars (like LOCUST_MASTER_SERVICE) and log a warning, or (if we also detect that LOCUST_MASTER_PORT is set to a non-integer value) throw an error with a more descriptive message that hints at this problem.
For something deterministic I'm really not sure what the right action is; locust explicitly wants to use environment variables and happens to use a naming pattern that people are likely to also use when naming the objects in a k8s manifest.
If we detect LOCUST_MASTER_SERVICE and LOCUST_MASTER_PORT is set to a non-integer value, we can throw an exception telling users to use a different name. We could of course disable env var reading if we detect KUBERNETES_SERVICE_HOST, but that would change the behaviour in a very sneaky way, so I think it is better to just force the user to use a different name.
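The rename workaround in practice looks like this (a sketch; loadtest-master is just an example name, anything whose upper-snake form doesn't collide with a LOCUST_* option variable works):
kind: Service
apiVersion: v1
metadata:
  name: loadtest-master      # expands to LOADTEST_MASTER_PORT=tcp://..., which locust never reads
spec:
  selector:
    app: loadtest-master
  ports:
    - port: 5557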
Hi @amribrahim! Just name your Service something other than locust-master. I'm guessing your deployment.yaml looks something like this:
@cyberw here is the deployment file:
kind: ReplicationController
Sorry, I didn't see your first comment :) Hmm... sorry, I don't know what the reason for the connection issue is...
I'm facing the same issue. The official documentation says that --master-port is "Optionally used together with --slave to set the port number of the master node (defaults to 5557)", but that was wrong: the master-port is required even if the default value is used. If the master-port value is not set, the web port (8089) is used.
@ohjongsung It sounds as if you're running into the same issue as many others in this thread. It's caused by environment variables that are automatically set by Kubernetes (and potentially other tools). Read the other posts in this thread if you want to understand more.
If the provided set of environment variables is for a Locust slave, then it would be:
Even I'm running into the same issue of "locust: error: argument --master-port: invalid int value:". Also, a normal locust command without the --master argument throws the same error.
apiVersion: v1
kind: Namespace
metadata:
name: locust-perf
labels:
name: locust-perf
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: master-locust
namespace: locust-perf
labels:
name: master-locust
spec:
replicas: 1
selector:
matchLabels:
app: master-locust
template:
metadata:
labels:
app: master-locust
spec:
containers:
- name: master-locust
image: perf_locust:v0.9.5
imagePullPolicy: Never
stdin: true
tty: true
securityContext:
runAsUser: 0
command: ["/bin/bash"]
env:
- name: LOCUST_MODE
value: master
- name: TARGET_HOST
value: ''
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5555
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: locust-service
namespace: locust-perf
labels:
app: master-locust
role: master
spec:
ports:
- port: 8089
targetPort: loc-master-web
protocol: TCP
name: loc-master-web
- port: 5557
targetPort: loc-master-p1
protocol: TCP
name: loc-master-p1
- port: 5555
targetPort: loc-master-p2
protocol: TCP
name: loc-master-p2
selector:
app: master-locust
role: master
type: LoadBalancer
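One way to see what Kubernetes has injected is to run env inside the pod (for example kubectl exec <pod-name> -n locust-perf -- env | grep -i locust). With the manifests above, plus any leftover Services from earlier attempts, the output would contain lines like these (values hypothetical):
LOCUST_SERVICE_SERVICE_HOST=10.96.12.34
LOCUST_SERVICE_PORT=tcp://10.96.12.34:8089
LOCUST_MASTER_PORT=tcp://10.96.56.78:5557
The first two come from the locust-service Service above and are harmless; the last would come from a leftover Service named locust-master and is the value that --master-port parsing trips over.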
@kiranbhadale: Could you manually run ...?
@heyman I have resolved the above issue by deleting the entire project namespace on k8s and creating everything from scratch. But after resolving the issue, when I try to connect from the slave machine, it gives me the following error: locust: error: Unexpected value for LOCUST_MASTER: 'master-locust'. Expecting 'true', 'false', 'yes', 'no', '1' or '0'. Output of "env" on the slave pod:
What happens if you rename the role to something other than "master"? I can't really see where the ...
Since the metadata name for the master is master-locust in the master YAML, I have set the same in the slave config as well. And it was working before in version 0.12.2, but due to a gevent issue I have upgraded locust to 0.14.6, and it has started failing on the k8s cluster. Slave YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: locust-worker
namespace: locust-perf
labels:
name: locust-worker
spec:
replicas: 1
selector:
matchLabels:
app: locust-worker
template:
metadata:
labels:
app: locust-worker
spec:
containers:
- name: locust-worker
image: perf_locust:v0.9.5
imagePullPolicy: Never
tty: true
stdin: true
securityContext:
runAsUser: 0
command: ["/bin/bash"]
resources:
limits:
cpu: 500m
memory: 512Mi
env:
- name: LOCUST_MODE
value: worker
- name: LOCUST_MASTER
value: master-locust
You should not be setting the master hostname in LOCUST_MASTER; use LOCUST_MASTER_HOST instead.
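For 0.14.x the worker's env block would then look something like this (a sketch based on the manifests in this thread; LOCUST_MODE is read by the image's docker_start.sh, the LOCUST_MASTER_* variables by locust itself):
env:
  - name: LOCUST_MODE
    value: slave
  - name: LOCUST_MASTER_HOST
    value: locust-service    # the master Service's DNS name, not a boolean
  - name: LOCUST_MASTER_PORT
    value: "5557"
LOCUST_MASTER itself maps to the boolean --master flag, which is why giving it a hostname produces the "Expecting 'true', 'false', ..." error.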
Tried with LOCUST_MASTER_HOST, and that is where the slave is unable to connect to the master. It gets stuck here: INFO/locust.main: Starting Locust 0.14.6.
Did you provide LOCUST_MASTER_PORT value: "5557" in the slave config file?
I tried your suggestion but it still got stuck at "INFO/locust.main: Starting Locust 0.14.6". Also, I thought that if we don't provide the port, it should be picked up by default.
Are both your Locust master and Locust slave pods running? Can you try to create the environment variables as below in the Locust slave?
@wasimansari661 Yes, both my slave and master pods are running. And I am running the locust file using a make command which is part of args in the YAML (e.g. make perf_test_slave target=<file_path>). So passing the file_path and target_url won't help.
Can you please check your master Service configuration?
I tried to ping two IP addresses, one for the pod and one for the cluster (service). The YAML below can be referred to for the pod and service.
apiVersion: v1
kind: Namespace
metadata:
name: locust-perf
labels:
name: locust-perf
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: master-locust
namespace: locust-perf
labels:
name: master-locust
spec:
replicas: 1
selector:
matchLabels:
app: master-locust
template:
metadata:
labels:
app: master-locust
spec:
containers:
- name: master-locust
image: perf_locust:v0.9.5
imagePullPolicy: Never
stdin: true
tty: true
securityContext:
runAsUser: 0
command: ["/bin/bash"]
env:
- name: LOCUST_MODE
value: master
- name: TARGET_HOST
value: ''
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5555
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: locust-service
namespace: locust-perf
labels:
app: master-locust
role: master
spec:
ports:
- port: 8089
targetPort: loc-master-web
protocol: TCP
name: loc-master-web
- port: 5557
targetPort: loc-master-p1
protocol: TCP
name: loc-master-p1
- port: 5555
targetPort: loc-master-p2
protocol: TCP
name: loc-master-p2
selector:
app: master-locust
role: master
type: LoadBalancer
@kiranbhadale I had the same issue with the workers not connecting to the master in k8s. What is the value of ...?
@yeoji, Initially I tried using the service name, but it didn't work; I guess there was some other problem. But the details below helped me, thanks @mmarquezv. Apart from this, I created a fresh docker image, deleted the performance namespace, and started from scratch to avoid any conflicts from my previous builds. Master and Service:
apiVersion: v1
kind: Namespace
metadata:
name: locust-perf
labels:
name: locust-perf
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: lm-pod
namespace: locust-perf
labels:
name: lm-pod
spec:
replicas: 1
selector:
matchLabels:
app: lm-pod
template:
metadata:
labels:
app: lm-pod
spec:
containers:
- name: lm-pod
image: perf_locust:v0.9.7
imagePullPolicy: Never
stdin: true
tty: true
securityContext:
runAsUser: 0
command: ["/bin/bash","-c"]
args: [<make command to run my locust file>]
env:
- name: LOCUST_MODE
value: master
- name: TARGET_HOST
value: ''
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5555
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: lm-pod
namespace: locust-perf
labels:
app: lm-pod
spec:
ports:
- port: 8089
targetPort: loc-master-web
protocol: TCP
name: loc-master-web
- port: 5557
targetPort: loc-master-p1
protocol: TCP
name: loc-master-p1
- port: 5555
targetPort: loc-master-p2
protocol: TCP
name: loc-master-p2
selector:
app: lm-pod
type: LoadBalancer
Slave YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: lw-pod
namespace: locust-perf
labels:
name: lw-pod
spec:
replicas: 1
selector:
matchLabels:
app: lw-pod
template:
metadata:
labels:
app: lw-pod
spec:
containers:
- name: lw-pod
image: perf_locust:v0.9.7
imagePullPolicy: Never
tty: true
stdin: true
securityContext:
runAsUser: 0
command: ["/bin/bash","-c"]
args: [<make command to run my locust file>]
resources:
limits:
cpu: 500m
memory: 512Mi
env:
- name: LOCUST_MODE
value: slave
- name: LOCUST_MASTER_HOST
value: lm-pod
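Why the rename works: lm-pod expands to LM_POD_* variables, so inside the pods Kubernetes now injects something like the following (values hypothetical):
LM_POD_SERVICE_HOST=10.96.0.42
LM_POD_PORT=tcp://10.96.0.42:8089
Neither matches a LOCUST_<option-name> pattern, so locust's env var parsing is unaffected.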
That simply means that your slave/worker nodes haven't connected to the master node. This is most likely related to your Kubernetes config/setup.
I'm sorry you're struggling with getting Kubernetes to work. However, nothing you've presented seems to suggest any actual issue with Locust. Also, please don't double-post the same message requesting help in multiple issues.
This should not happen any more now that we have renamed the env var for the master port to LOCUST_MASTER_NODE_PORT (the original issue, I mean; some of the follow-up discussion here might still apply, but that should be a separate ticket).
Describe the bug
locust: error: argument --master-port: invalid int value: 'tcp://172.30.44.4:8089'
Expected behavior
Locust runs correctly without producing this error, as --master-port wasn't even supplied.
Actual behavior
When invoking locust in master mode using the following:
locust -f /scripts/locust-script.py --master
the following error occurs:
locust: error: argument --master-port: invalid int value: 'tcp://172.30.44.4:8089'
Steps to reproduce
Start locust in the container using the command:
locust -f /scripts/locust-script.py --master
Error occurs.
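A minimal manifest that reproduces the clash (a sketch assembled from this thread; the image tag and labels are assumptions, the script path is from the report):
kind: Service
apiVersion: v1
metadata:
  name: locust-master        # this name is what injects LOCUST_MASTER_PORT=tcp://<ip>:8089
spec:
  selector:
    app: locust-master
  ports:
    - port: 8089
---
apiVersion: v1
kind: Pod
metadata:
  name: locust-master
  labels:
    app: locust-master
spec:
  containers:
    - name: locust-master
      image: locustio/locust:0.14.6            # assumed tag; any build before the env var rename reproduces it
      command: ["locust", "-f", "/scripts/locust-script.py", "--master"]
Any pod created after the Service in the same namespace inherits LOCUST_MASTER_PORT and fails with the reported error.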
Environment settings