Rework loadbalancer server selection logic #11329

Merged: 12 commits merged into master from the loadbalancer-rewrite branch on Dec 6, 2024

Conversation

brandond (Member) commented Nov 15, 2024

Proposed Changes

This PR does the following:

  • Groups related functions into separate files
  • Simplifies the LoadBalancer public functions to use more consistent naming
  • Moves the load-balancer server list into a new type, does away with a number of redundant shared-state variables (localServerURL, defaultServerAddress, randomServers, currentServerAddress, nextServerIndex), and reworks the dialer and health-checking logic to use the sorted list maintained by that type
  • Adds a fallback check to retrieve apiserver endpoints from the server address, in case of a total apiserver outage
  • Fixes issues with tests that made it difficult to diagnose problems while testing the above changes

This should be easier to test and maintain, and provide more consistent behavior:

  1. All connections are sent to the same server (the active server), as long as it passes health checks and can be dialed.
  2. If a server fails health checks or cannot be dialed, we want to pick a new active server, and close other connections to the failed node so that they reconnect to the new server.
  3. The new active server should be picked from the list of servers in order of preference:
    1. Servers that recently recovered from a failed state, as they may have been taken down for patching and are less likely to go down again soon
    2. Servers that are passing health checks
    3. The default server (if it wouldn't otherwise be present in the server list)
    4. Servers that were failed, but have since passed a single health check or been dialed successfully
    5. Servers that failed their most recent health check or dial
Server state preference order:
flowchart LR
Active --> Preferred --> Healthy --> Unchecked --> Standby --> Recovering --> Failed
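
The preference order above could be expressed in Go roughly as follows. This is a minimal sketch; the type, constant, and function names are illustrative and are not the actual identifiers used in this PR:

package loadbalancer

import "sort"

// state models the server states from the diagram above; higher values are
// preferred earlier when choosing a new active server.
type state int

const (
	stateInvalid state = iota
	stateFailed
	stateRecovering
	stateStandby
	stateUnchecked
	stateHealthy
	statePreferred
	stateActive
)

type server struct {
	address string
	state   state
}

// sortByPreference orders servers so that the best candidate for promotion
// to active comes first.
func sortByPreference(servers []*server) {
	sort.SliceStable(servers, func(i, j int) bool {
		return servers[i].state > servers[j].state
	})
}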
Possible state changes:
stateDiagram-v2
[*] --> Unchecked
[*] --> Standby
Unchecked --> Recovering
Unchecked --> Failed
Unchecked --> Invalid
Failed --> Recovering
Failed --> Invalid
Recovering --> Preferred
Recovering --> Active
Recovering --> Failed
Recovering  --> Invalid
Standby --> Unchecked
Standby --> Invalid
Healthy --> Failed
Healthy --> Active
Healthy --> Invalid
Preferred --> Failed
Preferred --> Healthy
Preferred --> Active
Preferred --> Invalid
Active --> Failed
Active --> Invalid
Invalid --> [*]
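
The allowed transitions in the diagram can be read as a lookup table; a sketch building on the illustrative types from the snippet above:

// validTransitions mirrors the state diagram: a server in a given state may
// only move to one of the listed states (Invalid is terminal).
var validTransitions = map[state][]state{
	stateUnchecked:  {stateRecovering, stateFailed, stateInvalid},
	stateFailed:     {stateRecovering, stateInvalid},
	stateRecovering: {statePreferred, stateActive, stateFailed, stateInvalid},
	stateStandby:    {stateUnchecked, stateInvalid},
	stateHealthy:    {stateFailed, stateActive, stateInvalid},
	statePreferred:  {stateFailed, stateHealthy, stateActive, stateInvalid},
	stateActive:     {stateFailed, stateInvalid},
}

// canTransition reports whether moving between two states is permitted.
func canTransition(from, to state) bool {
	for _, s := range validTransitions[from] {
		if s == to {
			return true
		}
	}
	return false
}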
Logging

Health check state transitions are also logged at INFO level, for better visibility when not running with debug logging enabled:
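The messages below follow a fixed "address@OLD->NEW from reason" pattern; a hedged sketch of how such a line might be emitted with logrus (not necessarily the exact call used in the PR):

import "github.com/sirupsen/logrus"

// logTransition records a server state change at INFO level in the same
// style as the messages shown below.
func logTransition(address, from, to, reason string) {
	logrus.Infof("Server %s@%s->%s from %s", address, from, to, reason)
}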

Nov 17 20:45:32 systemd-node-2 systemd[1]: Starting Lightweight Kubernetes...

Nov 17 20:45:33 systemd-node-2 k3s[362]: time="2024-11-17T20:45:33Z" level=info msg="Updated load balancer k3s-agent-load-balancer default server: 172.17.0.4:6443"
Nov 17 20:45:33 systemd-node-2 k3s[362]: time="2024-11-17T20:45:33Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.17.0.7:6443"
Nov 17 20:45:33 systemd-node-2 k3s[362]: time="2024-11-17T20:45:33Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.17.0.7:6443] [default: 172.17.0.4:6443]"
Nov 17 20:45:33 systemd-node-2 k3s[362]: time="2024-11-17T20:45:33Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [172.17.0.7:6443] [default: 172.17.0.4:6443]"
Nov 17 20:45:33 systemd-node-2 k3s[362]: time="2024-11-17T20:45:33Z" level=info msg="Server 172.17.0.7:6443@PREFERRED->ACTIVE from successful dial"

Nov 17 20:45:47 systemd-node-2 k3s[362]: time="2024-11-17T20:45:47Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.17.0.8:6443"
Nov 17 20:45:47 systemd-node-2 k3s[362]: time="2024-11-17T20:45:47Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.17.0.7:6443 172.17.0.8:6443] [default: 172.17.0.4:6443]"

Nov 17 20:45:49 systemd-node-2 k3s[362]: time="2024-11-17T20:45:49Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:45:49 systemd-node-2 k3s[362]: time="2024-11-17T20:45:49Z" level=info msg="Connecting to proxy" url="wss://172.17.0.8:6443/v1-k3s/connect"
Nov 17 20:45:49 systemd-node-2 k3s[362]: time="2024-11-17T20:45:49Z" level=info msg="Remotedialer connected to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:45:49 systemd-node-2 k3s[362]: time="2024-11-17T20:45:49Z" level=info msg="Remotedialer connected to proxy" url="wss://172.17.0.8:6443/v1-k3s/connect"
Nov 17 20:46:48 systemd-node-2 k3s[362]: time="2024-11-17T20:46:48Z" level=info msg="Server 172.17.0.8:6443@PREFERRED->HEALTHY from successful health check"

Nov 17 20:46:53 systemd-node-2 systemd[1]: Starting Lightweight Kubernetes...

Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Updated load balancer k3s-agent-load-balancer default server: 172.17.0.4:6443"
Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.17.0.8:6443"
Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.17.0.7:6443"
Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.17.0.8:6443 172.17.0.7:6443] [default: 172.17.0.4:6443]"
Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [172.17.0.8:6443 172.17.0.7:6443] [default: 172.17.0.4:6443]"
Nov 17 20:47:02 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:02Z" level=info msg="Server 172.17.0.8:6443@PREFERRED->ACTIVE from successful dial"

Nov 17 20:47:09 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:09Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:09 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:09Z" level=info msg="Connecting to proxy" url="wss://172.17.0.8:6443/v1-k3s/connect"
Nov 17 20:47:09 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:09Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 172.17.0.7:6443: connect: connection refused" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:09 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:09Z" level=info msg="Remotedialer connected to proxy" url="wss://172.17.0.8:6443/v1-k3s/connect"
Nov 17 20:47:09 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:09Z" level=info msg="Server 172.17.0.7:6443@PREFERRED->FAILED from failed health check"

Nov 17 20:47:10 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:10Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:10 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:10Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 172.17.0.7:6443: connect: connection refused" url="wss://172.17.0.7:6443/v1-k3s/connect"

Nov 17 20:47:11 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:11Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:11 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:11Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 172.17.0.7:6443: connect: connection refused" url="wss://172.17.0.7:6443/v1-k3s/connect"

Nov 17 20:47:12 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:12Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:12 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:12Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 172.17.0.7:6443: connect: connection refused" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:47:12 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:12Z" level=info msg="Removing server from load balancer k3s-agent-load-balancer: 172.17.0.7:6443"
Nov 17 20:47:12 systemd-node-2 k3s[1190]: time="2024-11-17T20:47:12Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.17.0.8:6443] [default: 172.17.0.4:6443]"

Nov 17 20:49:50 systemd-node-2 k3s[1190]: time="2024-11-17T20:49:50Z" level=info msg="Adding server to load balancer k3s-agent-load-balancer: 172.17.0.7:6443"
Nov 17 20:49:50 systemd-node-2 k3s[1190]: time="2024-11-17T20:49:50Z" level=info msg="Updated load balancer k3s-agent-load-balancer server addresses -> [172.17.0.8:6443 172.17.0.7:6443] [default: 172.17.0.4:6443]"
Nov 17 20:49:50 systemd-node-2 k3s[1190]: time="2024-11-17T20:49:50Z" level=info msg="Connecting to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"
Nov 17 20:49:50 systemd-node-2 k3s[1190]: time="2024-11-17T20:49:50Z" level=info msg="Remotedialer connected to proxy" url="wss://172.17.0.7:6443/v1-k3s/connect"

Nov 17 20:50:50 systemd-node-2 k3s[1190]: time="2024-11-17T20:50:50Z" level=info msg="Server 172.17.0.7:6443@PREFERRED->HEALTHY from successful health check"
New Metrics

Note: This requires starting servers with the --supervisor-metrics flag to enable serving metrics on the supervisor port for both servers and agents.

brandond@dev01:~$ kubectl get --server https://172.17.0.5:6443 --raw /metrics | grep loadbalancer
kubectl get --server https://172.17.0.9:6443 --raw /metrics | grep loadbalancer
# HELP k3s_loadbalancer_dial_duration_seconds Time taken to dial a connection to a backend server
# TYPE k3s_loadbalancer_dial_duration_seconds histogram
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.001"} 33
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.002"} 35
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.004"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.008"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.016"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.032"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.064"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.128"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.256"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="0.512"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="1.024"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="2.048"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="4.096"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="8.192"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="16.384"} 36
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-agent-load-balancer",status="success",le="+Inf"} 36
k3s_loadbalancer_dial_duration_seconds_sum{name="k3s-agent-load-balancer",status="success"} 0.011763531999999997
k3s_loadbalancer_dial_duration_seconds_count{name="k3s-agent-load-balancer",status="success"} 36
# HELP k3s_loadbalancer_server_connections Count of current connections to loadbalancer server
# TYPE k3s_loadbalancer_server_connections gauge
k3s_loadbalancer_server_connections{name="k3s-agent-load-balancer",server="172.17.0.4:6443"} 0
k3s_loadbalancer_server_connections{name="k3s-agent-load-balancer",server="172.17.0.7:6443"} 0
k3s_loadbalancer_server_connections{name="k3s-agent-load-balancer",server="172.17.0.8:6443"} 4
# HELP k3s_loadbalancer_server_health Current health value of loadbalancer server
# TYPE k3s_loadbalancer_server_health gauge
k3s_loadbalancer_server_health{name="k3s-agent-load-balancer",server="172.17.0.4:6443"} 2
k3s_loadbalancer_server_health{name="k3s-agent-load-balancer",server="172.17.0.7:6443"} 5
k3s_loadbalancer_server_health{name="k3s-agent-load-balancer",server="172.17.0.8:6443"} 7


brandond@dev01:~$ kubectl get --server https://172.17.0.7:6443 --raw /metrics | grep loadbalancer
# HELP k3s_loadbalancer_dial_duration_seconds Time taken to dial a connection to a backend server
# TYPE k3s_loadbalancer_dial_duration_seconds histogram
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.001"} 189
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.002"} 189
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.004"} 191
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.008"} 192
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.016"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.032"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.064"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.128"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.256"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="0.512"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="1.024"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="2.048"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="4.096"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="8.192"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="16.384"} 193
k3s_loadbalancer_dial_duration_seconds_bucket{name="k3s-etcd-server-load-balancer",status="success",le="+Inf"} 193
k3s_loadbalancer_dial_duration_seconds_sum{name="k3s-etcd-server-load-balancer",status="success"} 0.05489411500000002
k3s_loadbalancer_dial_duration_seconds_count{name="k3s-etcd-server-load-balancer",status="success"} 193
# HELP k3s_loadbalancer_server_connections Count of current connections to loadbalancer server
# TYPE k3s_loadbalancer_server_connections gauge
k3s_loadbalancer_server_connections{name="k3s-etcd-server-load-balancer",server="172.17.0.4:2379"} 80
k3s_loadbalancer_server_connections{name="k3s-etcd-server-load-balancer",server="172.17.0.5:2379"} 0
k3s_loadbalancer_server_connections{name="k3s-etcd-server-load-balancer",server="172.17.0.6:2379"} 0
# HELP k3s_loadbalancer_server_health Current health value of loadbalancer server
# TYPE k3s_loadbalancer_server_health gauge
k3s_loadbalancer_server_health{name="k3s-etcd-server-load-balancer",server="172.17.0.4:2379"} 7
k3s_loadbalancer_server_health{name="k3s-etcd-server-load-balancer",server="172.17.0.5:2379"} 5
k3s_loadbalancer_server_health{name="k3s-etcd-server-load-balancer",server="172.17.0.6:2379"} 5
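
The metric families above could be declared along these lines with prometheus/client_golang. This is a sketch: the metric names, help text, and labels are taken from the output above, while the Go variable names and the bucket layout are assumptions:

import "github.com/prometheus/client_golang/prometheus"

var (
	// Dial latency histogram, labeled by load balancer name and dial outcome.
	dialDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "k3s_loadbalancer_dial_duration_seconds",
		Help:    "Time taken to dial a connection to a backend server",
		Buckets: prometheus.ExponentialBuckets(0.001, 2, 15), // 0.001s .. 16.384s
	}, []string{"name", "status"})

	// Current connection count per backend server.
	serverConnections = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "k3s_loadbalancer_server_connections",
		Help: "Count of current connections to loadbalancer server",
	}, []string{"name", "server"})

	// Health/state value per backend server.
	serverHealth = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "k3s_loadbalancer_server_health",
		Help: "Current health value of loadbalancer server",
	}, []string{"name", "server"})
)

These would then be registered with prometheus.MustRegister and updated from the dial and health-check paths.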

Types of Changes

tech debt; enhancement

Verification

Testing

Yes

Linked Issues

User-Facing Change

Further Comments

@brandond brandond force-pushed the loadbalancer-rewrite branch 3 times, most recently from 416f641 to f38a6cf Compare November 16, 2024 04:21
@brandond brandond marked this pull request as ready for review November 16, 2024 04:29
@brandond brandond requested a review from a team as a code owner November 16, 2024 04:29

codecov bot commented Nov 16, 2024

Codecov Report

Attention: Patch coverage is 79.96516% with 115 lines in your changes missing coverage. Please review.

Project coverage is 42.99%. Comparing base (7296fa8) to head (129ac6d).
Report is 13 commits behind head on master.

Files with missing lines Patch % Lines
pkg/agent/loadbalancer/servers.go 84.53% 39 Missing and 4 partials ⚠️
pkg/etcd/etcdproxy.go 0.00% 19 Missing ⚠️
pkg/agent/loadbalancer/loadbalancer.go 74.50% 8 Missing and 5 partials ⚠️
pkg/agent/tunnel/tunnel.go 83.33% 10 Missing and 3 partials ⚠️
pkg/server/router.go 85.18% 5 Missing and 3 partials ⚠️
pkg/agent/loadbalancer/httpproxy.go 80.00% 5 Missing and 2 partials ⚠️
pkg/agent/config/config.go 63.63% 3 Missing and 1 partial ⚠️
pkg/util/client.go 66.66% 2 Missing and 1 partial ⚠️
pkg/agent/proxy/apiproxy.go 66.66% 1 Missing ⚠️
pkg/cli/token/token.go 0.00% 1 Missing ⚠️
... and 3 more
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #11329      +/-   ##
==========================================
- Coverage   43.78%   42.99%   -0.79%     
==========================================
  Files         162      181      +19     
  Lines       14415    18796    +4381     
==========================================
+ Hits         6311     8081    +1770     
- Misses       6827     9513    +2686     
+ Partials     1277     1202      -75     
Flag Coverage Δ
e2etests 35.24% <64.28%> (-8.54%) ⬇️
inttests 18.73% <20.73%> (?)
unittests 14.26% <52.26%> (?)


@brandond brandond force-pushed the loadbalancer-rewrite branch 3 times, most recently from a149b59 to 5443d8e Compare November 16, 2024 10:21
@brandond brandond changed the title [WIP] Rework loadbalancer server selection logic Rework loadbalancer server selection logic Nov 16, 2024
@brandond brandond force-pushed the loadbalancer-rewrite branch 13 times, most recently from 168f212 to a50f5d9 Compare November 19, 2024 11:24
Resolved (outdated) review threads on pkg/agent/loadbalancer/servers.go and pkg/agent/loadbalancer/loadbalancer.go.
liyimeng (Contributor)

@brandond Thanks! With the old implementation, we recently observed a panic when the apiserver is offline. I hope this sorts out all the issues.

liyimeng (Contributor)

I even suspect that this is the cause of #11346

@brandond brandond force-pushed the loadbalancer-rewrite branch from a50f5d9 to 74dde01 Compare November 20, 2024 17:32
brandond (Member, Author)

@liyimeng What you're describing sounds like #10317, which was fixed a while ago.

@brandond brandond requested a review from dereknola November 20, 2024 18:17
brandond (Member, Author) commented Nov 30, 2024

Quoting @liyimeng:
In my setup, I tell the agent that the server address is https://kubernetes.svc, and I resolve kubernetes.svc to the service IP (10.43.0.1) by modifying my /etc/hosts file.

Don't do that. On a fresh startup, that address will be unreachable until after kube-proxy starts to add iptables rules to handle traffic for the Kubernetes service. But kube-proxy won't be able to start because the agent can't contact the apiserver at that address until after kube-proxy is running. So you'll be stuck.

Also, this is all off-topic. Please open a new issue for whatever you have going on here.

(A follow-up comment from @liyimeng was marked as off-topic.)

liyimeng (Contributor) commented Dec 4, 2024

I even suspect that this is the cause of #11346

#11346 is a real issue; please re-open it if it is not addressed by this PR.

None of these fields or functions are used in k3s or rke2

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
…rivate

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
…watch fails

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
The error message should be printf style, not just concatenated. The
current message is garbled if the command or result contains things that
look like formatting directives:

`Internal error occurred: error sending request: Post "https://10.10.10.102:10250/exec/default/volume-test/volume-test?command=sh&command=-c&command=echo+local-path-test+%!!(MISSING)E(MISSING)+%!!(MISSING)F(MISSING)data%!!(MISSING)F(MISSING)test&error=1&output=1": proxy error from 127.0.0.1:6443 while dialing 10.10.10.102:10250, code 502: 502 Bad Gateway`

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
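
To illustrate the printf-style commit message above with a generic example (not the code changed in this PR): if dynamic content such as a URL is concatenated into a string that is later treated as a format string, any "%" escape sequences in it are interpreted as formatting directives and rendered as %!X(MISSING), whereas passing the value as a printf argument keeps it intact:

package main

import "fmt"

func main() {
	url := "https://example.com/exec?command=echo+%2Fdata%2Ftest" // contains "%" escapes

	// Concatenation: the URL becomes part of the format string, so "%2F" is
	// misinterpreted as a directive and the output is garbled.
	fmt.Println(fmt.Errorf("error sending request: Post " + url))

	// Printf style: the URL is passed as an argument and printed verbatim.
	fmt.Println(fmt.Errorf("error sending request: Post %q", url))
}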
@brandond brandond dismissed stale reviews from dereknola and manuelbuil via 129ac6d December 5, 2024 01:38
@brandond brandond force-pushed the loadbalancer-rewrite branch from fbdbcc5 to 129ac6d Compare December 5, 2024 01:38