The Locust worker configuration had to be modified; the deployment currently runs 130 worker nodes. I exported the deployment as a YAML file, edited the file, and applied the changes to the Locust workers.
The workers have been restarted and re-initialized with the new configuration. They are all running with the new environment variable that I modified.
The issue is that the node count in the Locust dashboard has doubled: when the restarted workers came back up, the Locust UI registered each one as a new node but did not remove the inactive old entry.
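For reference, the export-edit-apply cycle looked roughly like this (the deployment name `locust-worker` and the file name are assumptions; substitute your actual names):

```shell
# Export the current worker deployment to a manifest file
# ("locust-worker" is an assumed deployment name; check with `kubectl get deployments`)
kubectl get deployment locust-worker -o yaml > locust-worker.yaml

# Edit the environment variables in the exported manifest
vi locust-worker.yaml

# Apply the modified manifest; Kubernetes rolls the worker pods
kubectl apply -f locust-worker.yaml
```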
This is the current situation:
host-44-11-1-22:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config # kubectl get pods -o wide|wc -l
134
host-44-11-1-22:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config # kubectl get pods|grep Running|wc -l
133
host-44-11-1-22:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config #
Dashboard:
STATUS: HATCHING (85 users)
SLAVES: 260
RPS: 0
FAILURES: 0%
(Statistics table is empty: 0 requests, 0 fails)
What would be a quick re-initialization of the locust master to get the real number of nodes?
Thanks
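One quick approach that should work (sketched here as an assumption, not a confirmed fix): restart the Locust master pod so it comes up with an empty worker registry; the live workers then reconnect and the stale entries disappear. The label selector `name=locust-master` below is an assumption taken from typical distributed-Locust-on-Kubernetes setups; verify it against your own pods first.

```shell
# Find the master pod and its labels
# (the "locust-master" name is an assumption; adjust to what you see here)
kubectl get pods --show-labels | grep master

# Delete the master pod; its deployment/controller recreates it immediately
kubectl delete pod -l name=locust-master

# After the new master pod is Running, the workers reconnect
# and the dashboard should show the real slave count again
kubectl get pods | grep master
```

Only the master is restarted, so the workers keep their new configuration; they just re-register with the fresh master.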