A home for experiments for muddle.run.
Demo video: muddle-run-10-07-2021.mp4
Currently, the project represents the bare bones of the game I'm trying to build: a multiplayer runner with a collaborative level editor, which will also allow players to test levels while they are being designed.
- WASD movement
- Rapier physics
- Netcode (a rough first pass, inspired by Overwatch's GDC presentation)
- Interpolation
- Rewinding game state
- Real-time adjustment of client simulation speed
- Initial version of Builder mode (collaborative level editor)
This application can be run in either UDP or WebRTC mode. The workspace contains the following binary projects:

- `mr_desktop_client`
- `mr_web_client`
- `mr_server`
- `mr_matchmaker`
- `mr_persistence`
```bash
# Running the server
# (Note that 127.0.0.1 might not work for Windows, you can use your local network IP instead, like 192.168.x.x)
# (See https://github.com/naia-rs/naia-socket/issues/24)
MUDDLE_PUBLIC_IP_ADDR=127.0.0.1 MUDDLE_LISTEN_PORT=3455 cargo run -p mr_server

# Running the client
cargo run -p mr_desktop_client
```
```bash
# Running the server
# (Note that 127.0.0.1 might not work for Firefox, you can use your local network IP instead, like 192.168.x.x)
MUDDLE_PUBLIC_IP_ADDR=127.0.0.1 MUDDLE_LISTEN_PORT=3455 cargo run -p mr_server

# Running the client
cd bins/web_client
wasm-pack build --target web
basic-http-server . # or any other tool that can serve static files
```
- PostgreSQL database
- sqlx-cli
- Auth0 and Google OAuth 2.0 client credentials (see the environment variables section)
```bash
DATABASE_URL=postgres://postgres@localhost/mr_persistence_development sqlx database setup
```
Environment variables are read both when compiling the binaries and when running them (except for the web client). Environment variables read at run time take priority.
Dotenv files are also supported. They are searched for recursively, starting from the working directory and going up to the filesystem root. The binaries search for the following files:
- .env
- .env.$MUDDLE_ENV
- .env.$CARGO_PKG_NAME
- .env.$CARGO_PKG_NAME.$MUDDLE_ENV
Every file found is loaded, and all of the variables read are merged. Priority follows the list above: the most specific dotenv file overrides variables from the less specific ones.
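The merge order can be sketched with plain `HashMap`s (illustrative only; `merge_dotenv_files` is a hypothetical helper, not the project's actual loader):

```rust
use std::collections::HashMap;

/// Illustrative sketch of the dotenv merge order described above.
/// Files are passed from least to most specific, so later files
/// override earlier ones; run-time environment variables would in
/// turn override all of them.
fn merge_dotenv_files(files: &[HashMap<String, String>]) -> HashMap<String, String> {
    let mut merged = HashMap::new();
    for file in files {
        // More specific files overwrite entries from earlier ones.
        merged.extend(file.iter().map(|(k, v)| (k.clone(), v.clone())));
    }
    merged
}

fn main() {
    // .env
    let base: HashMap<_, _> =
        [("MUDDLE_LISTEN_PORT".to_string(), "3455".to_string())].into();
    // .env.production (more specific, wins on conflicts)
    let env_specific: HashMap<_, _> =
        [("MUDDLE_LISTEN_PORT".to_string(), "80".to_string())].into();
    let merged = merge_dotenv_files(&[base, env_specific]);
    assert_eq!(merged["MUDDLE_LISTEN_PORT"], "80");
}
```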
- `MUDDLE_ENV` — equals `production` when compiling with the release profile and `development` when compiling with the debug one. It can be overridden with an environment variable.
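This default can be expressed with `cfg!(debug_assertions)`; the sketch below is illustrative and assumes nothing about the project's actual resolution code beyond the rule just described:

```rust
/// Sketch of the rule above (not the project's actual code): an
/// explicit MUDDLE_ENV value wins; otherwise the value is derived
/// from the build profile (debug => "development", release => "production").
fn resolve_muddle_env(override_value: Option<String>) -> String {
    override_value.unwrap_or_else(|| {
        if cfg!(debug_assertions) {
            "development".to_string()
        } else {
            "production".to_string()
        }
    })
}

fn muddle_env() -> String {
    resolve_muddle_env(std::env::var("MUDDLE_ENV").ok())
}

fn main() {
    println!("MUDDLE_ENV = {}", muddle_env());
}
```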
- `MUDDLE_PUBLIC_IP_ADDR` (mandatory if outside an Agones cluster)
  - It can't equal `0.0.0.0`; use `127.0.0.1` if you want to connect to localhost, for instance.
  - Also, note that `127.0.0.1` might not work for Firefox; you can use your local network IP instead, like `192.168.x.x`.
- `MUDDLE_LISTEN_IP_ADDR` (defaults to `0.0.0.0`)
- `MUDDLE_LISTEN_PORT` (mandatory if outside an Agones cluster)
- `MUDDLE_IDLE_TIMEOUT` (defaults to `300`)
  - Specifies the time in milliseconds after which a server will be shut down if there are no connected players.
- `MUDDLE_PUBLIC_PERSISTENCE_URL` (optional)
- `MUDDLE_PRIVATE_PERSISTENCE_URL` (optional)
- `MUDDLE_GOOGLE_WEB_CLIENT_ID` (mandatory if persistence URLs are set)
- `MUDDLE_GOOGLE_DESKTOP_CLIENT_ID` (mandatory if persistence URLs are set)
- `MUDDLE_AUTH0_CLIENT_ID` (mandatory if persistence URLs are set)
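The `0.0.0.0` constraint on `MUDDLE_PUBLIC_IP_ADDR` can be checked with the standard library's `Ipv4Addr::is_unspecified`. This is an illustrative sketch; `validate_public_ip` is a hypothetical helper, not the server's actual validation:

```rust
use std::net::Ipv4Addr;

/// Sketch of the MUDDLE_PUBLIC_IP_ADDR constraint described above:
/// the value must parse as an IPv4 address and must not be the
/// unspecified address 0.0.0.0. (Illustrative only.)
fn validate_public_ip(addr: &str) -> Result<Ipv4Addr, String> {
    let ip: Ipv4Addr = addr
        .parse()
        .map_err(|e| format!("invalid IPv4 address {addr:?}: {e}"))?;
    if ip.is_unspecified() {
        return Err("MUDDLE_PUBLIC_IP_ADDR can't be 0.0.0.0; use 127.0.0.1 \
                    or your local network IP (e.g. 192.168.x.x) instead"
            .to_string());
    }
    Ok(ip)
}

fn main() {
    assert!(validate_public_ip("127.0.0.1").is_ok());
    assert!(validate_public_ip("0.0.0.0").is_err());
}
```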
- `MUDDLE_SERVER_IP_ADDR` (defaults to `127.0.0.1`)
- `MUDDLE_SERVER_PORT` (defaults to `3455`)
- `MUDDLE_MATCHMAKER_URL` (optional)
  - If this variable is passed, `MUDDLE_SERVER_IP_ADDR` and `MUDDLE_SERVER_PORT` are still read, but the server is displayed in the list only if `MUDDLE_SERVER_IP_ADDR` is passed explicitly (the default value is ignored).
- `MUDDLE_PUBLIC_PERSISTENCE_URL` (mandatory if `MUDDLE_MATCHMAKER_URL` is set)
- `MUDDLE_GOOGLE_CLIENT_ID` (mandatory if `MUDDLE_MATCHMAKER_URL` is set)
- `MUDDLE_GOOGLE_CLIENT_SECRET` (mandatory for the desktop client if `MUDDLE_MATCHMAKER_URL` is set)
  - Google treats desktop clients as public clients but still requires client secrets in their requests. Don't set the secret for web clients.
- `MUDDLE_AUTH0_CLIENT_ID` (mandatory if `MUDDLE_MATCHMAKER_URL` is set)
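The interaction between `MUDDLE_MATCHMAKER_URL` and `MUDDLE_SERVER_IP_ADDR` can be sketched as follows; `show_direct_server` is a hypothetical helper illustrating the rule, not the client's actual code:

```rust
/// Hypothetical helper illustrating the rule above: when a matchmaker
/// URL is configured, the directly configured server appears in the
/// server list only if MUDDLE_SERVER_IP_ADDR was passed explicitly
/// (the 127.0.0.1 default is ignored).
fn show_direct_server(matchmaker_url: Option<&str>, explicit_server_ip: Option<&str>) -> bool {
    match matchmaker_url {
        // No matchmaker: the client connects straight to the configured server.
        None => true,
        // With a matchmaker: list the direct server only when its IP was explicit.
        Some(_) => explicit_server_ip.is_some(),
    }
}

fn main() {
    assert!(show_direct_server(None, None));
    assert!(!show_direct_server(Some("https://matchmaker.example"), None));
    assert!(show_direct_server(Some("https://matchmaker.example"), Some("192.168.0.10")));
}
```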
- `SIMULATIONS_PER_SECOND` (defaults to `120`, compile-time only)
  - Is expected to work with the following values: `30`, `60`, `120`. You may want to set a lower value than the default one if your device can't handle 120 simulations per second.
  - Note that both the server and the client must be compiled with the same value.
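One reason client and server must agree on `SIMULATIONS_PER_SECOND` is that the fixed simulation timestep is derived directly from it; a small illustrative sketch (not the project's code):

```rust
use std::time::Duration;

/// Illustrative: SIMULATIONS_PER_SECOND is a compile-time constant,
/// and the fixed simulation timestep is derived from it, so binaries
/// compiled with different values would step their simulations at
/// different rates.
const SIMULATIONS_PER_SECOND: u16 = 120;

fn simulation_timestep() -> Duration {
    Duration::from_secs_f64(1.0 / SIMULATIONS_PER_SECOND as f64)
}

fn main() {
    // At 120 simulations per second, each step is roughly 8.33 ms.
    println!("timestep: {:?}", simulation_timestep());
}
```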
- `MUDDLE_GOOGLE_WEB_CLIENT_ID` (mandatory)
- `MUDDLE_GOOGLE_DESKTOP_CLIENT_ID` (mandatory)
- `MUDDLE_AUTH0_CLIENT_ID` (mandatory)
```bash
docker build -t mvlabat/mr_matchmaker -f mr_matchmaker.dockerfile . --platform linux/amd64
docker build -t mvlabat/mr_web_client --build-arg muddle_matchmaker_ip_addr=<IP> --build-arg muddle_matchmaker_port=<PORT> -f mr_web_client.dockerfile . --platform linux/amd64
docker build -t mvlabat/mr_server -f mr_server.dockerfile . --platform linux/amd64
```
- aws-cli (tested with 2.9.12)
  - Make sure to configure AWS CLI: `aws configure`
- kubectl (v1.23.0)
- helm (tested with v3.10.3)
```bash
terraform apply -target=module.eks_cluster
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
terraform apply -target=module.helm_agones
```

- Resources declared with `kubernetes_manifest` fail to plan without this helm release installed first.

```bash
terraform apply
```
```bash
terraform destroy -target=module.agones
helm delete agones -n agones-system && terraform destroy -target=module.helm_agones
terraform destroy -target=module.eks_cluster
```
```bash
kubectl set image deployment <DEPLOYMENT_NAME> <CONTAINER_NAME>=<IMAGE>
```

For example:

```bash
kubectl set image deployment mr-matchmaker mr-matchmaker=mvlabat/mr_matchmaker
```

- You can also alternate appending and removing the `:latest` tag suffix to trick Kubernetes into redeploying (otherwise it might think that the image is the same and won't pull its updated version).
```
Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```

To fix this error, run `export KUBE_CONFIG_PATH=~/.kube/config` (or add it to your shell rc).

- Also, make sure you ran `aws eks --region <region-code> update-kubeconfig --name <cluster_name>` before.