Merge branch 'main' into website
cBournhonesque committed Aug 17, 2024
2 parents 847dcb1 + 841a7c1 commit 73759dc
Showing 50 changed files with 3,114 additions and 1,187 deletions.
622 changes: 83 additions & 539 deletions NOTES.md

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions book/src/SUMMARY.md
@@ -31,6 +31,7 @@
- [Server](./concepts/bevy_integration/server.md)
- [Events](./concepts/bevy_integration/events.md)
- [Advanced Replication](./concepts/advanced_replication/title.md)
- [Authority](./concepts/advanced_replication/authority.md)
- [Bandwidth Management](./concepts/advanced_replication/bandwidth_management.md)
- [Replication Logic](./concepts/advanced_replication/replication_logic.md)
- [Inputs](./concepts/advanced_replication/inputs.md)
96 changes: 96 additions & 0 deletions book/src/concepts/advanced_replication/authority.md
@@ -0,0 +1,96 @@
# Authority

Networked entities can be simulated on a client or on the server.
'Authority' refers to the decision of which **peer is simulating an entity**.
The authoritative peer (client or server) is the only one allowed to send replication updates for an entity, and it will not accept updates for that entity from a non-authoritative peer.

Only **one peer** can be the authority over an entity at a given time.


### Benefits of distributed client-authority

Client authority means that a client is directly responsible for simulating an entity and sending
replication updates for that entity.

Pros:
- lower latency: the simulating client sees the results of its own actions immediately
- less CPU load on the server, since clients are simulating some of the entities

Cons:
- high exposure to cheating, since the server trusts the client's updates


### How it works

We have 2 components:
- `HasAuthority`: this is a marker component that you can use as a filter in queries
to check if the current peer has authority over the entity.
- on clients:
- a client will not accept any replication updates from the server if it has `HasAuthority` for an entity
- a client will send replication updates for an entity only if it has `HasAuthority` for that entity
- on server:
- this component is just an indicator for convenience: the server can still send replication
updates even if it doesn't have `HasAuthority` for an entity (because it broadcasts the updates coming
from the authoritative client)
- `AuthorityPeer`: this component is only present on the server, and it indicates to the server which
peer currently holds authority over an entity. (`None`, `Server` or a `Client`).
The server will only accept replication updates for an entity if the sender matches the `AuthorityPeer`.
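The acceptance rules above can be sketched in plain Rust. This is a simplified model with assumed standalone types, not the actual lightyear API: `AuthorityPeer` mirrors the component described above, and the `HasAuthority` marker is modeled as a boolean.

```rust
// Sketch of the update-acceptance rules (assumed types, not lightyear's API).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum AuthorityPeer {
    None,
    Server,
    Client(u64), // client id
}

/// The server only accepts a replication update for an entity if the
/// sender is the peer that currently holds authority over it.
pub fn server_accepts_update(authority: AuthorityPeer, sender_client_id: u64) -> bool {
    authority == AuthorityPeer::Client(sender_client_id)
}

/// A client ignores server updates for entities it has authority over
/// (i.e. entities marked with `HasAuthority`).
pub fn client_accepts_update(has_authority: bool) -> bool {
    !has_authority
}
```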

### Authority Transfer

On the server, you can use the `EntityCommand` `transfer_authority` to transfer the authority for an entity to a different peer.
The command is simply `commands.entity(entity).transfer_authority(new_owner)` to transfer the authority of `entity` to the `AuthorityPeer` `new_owner`.

Under the hood, an authority transfer does two things:
- on the server, the transfer is applied immediately (i.e. the `HasAuthority` and `AuthorityPeer` components are updated instantly)
- then the server sends messages to clients to notify them of the authority change. Upon receiving the message, a client adds or removes its `HasAuthority` component as needed.
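The server-side half of the transfer (step 1) can be sketched as a simple state transition. This is a hedged sketch with assumed names (`EntityAuthority` is not a real lightyear type); step 2, the notification messages to clients, is not modeled here.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum AuthorityPeer {
    None,
    Server,
    Client(u64),
}

/// Hypothetical server-side bookkeeping for one replicated entity.
pub struct EntityAuthority {
    pub authority_peer: AuthorityPeer,
    pub has_authority: bool, // models the server's own `HasAuthority` marker
}

impl EntityAuthority {
    /// Step 1 of an authority transfer: applied immediately on the server.
    /// The server holds `HasAuthority` exactly when it is the new owner.
    pub fn transfer_authority(&mut self, new_owner: AuthorityPeer) {
        self.authority_peer = new_owner;
        self.has_authority = new_owner == AuthorityPeer::Server;
    }
}
```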

### Implementation details

- There can be a period where both the client and the server have authority at the same time:
  - server transferring authority from itself to a client: there is a period of time where
  no peer has authority, which is ok.
  - server transferring authority from a client to itself: there is a period of time where
  both the client and the server have authority. The client's updates won't be accepted by the server (which now has authority), and the server's updates won't be accepted by the client (which still has authority), so no conflicting updates are applied.

- server is transferring authority from client C1 to client C2:
- if C1 receives the message first, then for a short period of time no client has authority, which is ok
- if C2 receives the message first, then for a short period of time both clients have authority. However the `AuthorityPeer` is immediately updated on the server, so the server will only
accept updates from C2, and will discard the updates from C1.

- We have to be careful on the server about how updates are re-broadcast to other clients.
If client 1 has authority and the server broadcasts updates to all clients, we keep the `ReplicationTarget` as `NetworkTarget::All` (it would be tedious to keep track of how the replication target needs to be updated every time authority changes); instead, **the server simply never sends updates to the client that currently has authority.**

- One subtlety: lightyear used to apply entity mapping only on the receiver side. The receiver gets a 'Spawn' message with the remote entity id, so it knows how to map from the local id to the remote id. But authority can now be transferred to the receiver: the receiver then starts sending replication updates, while the peer that originally spawned the entity has no entity mapping for them. This means that the new sender (who was originally the receiver) must perform the entity mapping on the send side.
- the Entity in EntityUpdates or EntityActions can now be mapped by the sender, if there is a mapping detected in `local_to_remote` entity map
- the entity mappers used on the send side and the receiver side are not the same anymore. To avoid possible conflicts, on the send side we flip a bit to indicate that we did a local->remote mapping so that the receiver doesn't potentially reapply a remote->local mapping. The send entity_map flips the bit, and the remote entity_map checks the bit.
- since we are now potentially doing entity mapping on the send side, we cannot just replicate a component `&C` because we might have to update the component to do entity mapping. Therefore if the component implements `MapEntities`, we clone it first and then apply entity mapping.
- TODO: this is potentially inefficient because it should be quite rare that the sender needs to do entity mapping (it's only if the authority over an entity was transferred). However components that contain other entities should change pretty infrequently so this clone should be ok. Still, it would be nice if we could avoid it
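The bit-flipping trick described above can be sketched on a bare 64-bit entity id. This is an assumed layout (high bit as the marker) purely for illustration; lightyear's actual wire format may differ.

```rust
/// Reserve the high bit of the serialized entity id to mean
/// "the sender already performed local->remote mapping".
const MAPPED_BIT: u64 = 1 << 63;

/// Send side: after mapping local->remote, flip the marker bit on.
pub fn mark_mapped(entity_bits: u64) -> u64 {
    entity_bits | MAPPED_BIT
}

/// Receive side: if the marker bit is set, the id is already in the
/// receiver's space, so skip the remote->local mapping. Returns the
/// cleared id and whether the receiver still needs to map it.
pub fn receive(entity_bits: u64) -> (u64, bool) {
    let already_mapped = entity_bits & MAPPED_BIT != 0;
    (entity_bits & !MAPPED_BIT, !already_mapped)
}
```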

- We want the `Interpolated` entity to still get updated even if the client has authority over the `Confirmed` entity. To do this, we populate the `ConfirmedHistory` with the server's updates when we don't have authority, and with the client's `Confirmed` updates if we have authority. This makes sense because `Interpolated` should just interpolate between ground truth states.
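The choice of which updates feed the `ConfirmedHistory` reduces to a single branch on authority. A minimal sketch, with assumed names (`UpdateSource` is hypothetical, not a lightyear type):

```rust
#[derive(PartialEq, Eq, Debug)]
pub enum UpdateSource {
    Server,
    LocalConfirmed,
}

/// `Interpolated` should always interpolate between ground-truth states:
/// feed the history with the server's updates when we don't have authority,
/// and with our own `Confirmed` updates when we do.
pub fn confirmed_history_source(client_has_authority: bool) -> UpdateSource {
    if client_has_authority {
        UpdateSource::LocalConfirmed
    } else {
        UpdateSource::Server
    }
}
```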


TODO:
- what to do with prepredicted?
- client spawns an entity with PrePredicted
- server receives it, adds Replicate
- currently: server replicates a spawn, which will become the Confirmed entity on the client.
- if the Spawn has entity mapping, then we're screwed! (because it maps to the client entity)
- if the Spawn has no entity mapping but the Components do, we're screwed (they will be interpreted as 2 different actions)
- sol 1: use the local entity for bookkeeping and apply entity mapping at the end for the entire action. If the action has a spawn, no mapping. (because it's a new entity)
- sol 2: we change how PrePredicted works. It spawns a Confirmed AND a Predicted on client; and replicates the Confirmed. Then the server transfers authority to the client upon receipt.
- test with conflict (both client and server spawn entity E and replicate it to the remote)


TODO:
- maybe let the client always accept updates from the server, even if the client has `HasAuthority`? What is the goal of preventing the client from accepting server updates while it has
`HasAuthority`?
- maybe include a timestamp/tick to the `ChangeAuthority` messages so that any in-flight replication updates can be handled correctly?
- authority changes from C1 to C2 on tick 7. All updates from C1 that are previous to tick 7 are accepted by the server. Updates after that are discarded. We receive updates from C2 as soon as it receives the `ChangeAuthority` message.
- authority changes from C1 to S on tick 7. All updates from C1 that are previous to tick 7 are accepted by the server.
- how do we deal with `Predicted`?
- if Confirmed has authority, we probably want to disable rollback and set the predicted state to be equal to the confirmed state?
- ideally, a client-authoritative entity should interact with 0 delay with the client's predicted entities. But currently only the Confirmed entity would get the authority. Would
we also want to sync the `HasAuthority` component so that it gets added to Predicted?
- maybe have an API `request_authority` where the client requests the authority? and receives a response from the server telling it if the request is accepted or not?
Look at this page: https://docs-multiplayer.unity3d.com/netcode/current/basics/ownership/
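The tick-stamped `ChangeAuthority` idea from the TODO above could be sketched as follows. This is hypothetical and not implemented; all names (`TickedAuthority`, `accepts`) are assumptions, and it models the rule "accept the previous owner's in-flight updates up to the transfer tick, and the new owner's updates from then on".

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum AuthorityPeer {
    None,
    Server,
    Client(u64),
}

/// Hypothetical: authority over an entity, plus the tick at which the
/// current owner acquired it and who held it before.
pub struct TickedAuthority {
    pub owner: AuthorityPeer,
    pub since_tick: u32,
    pub previous_owner: AuthorityPeer,
}

/// Accept in-flight updates from the previous owner only if they were
/// produced before the transfer tick; accept the new owner's updates
/// from the transfer tick onward; discard everything else.
pub fn accepts(auth: &TickedAuthority, sender: AuthorityPeer, update_tick: u32) -> bool {
    if sender == auth.owner {
        update_tick >= auth.since_tick
    } else if sender == auth.previous_owner {
        update_tick < auth.since_tick
    } else {
        false
    }
}
```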
10 changes: 7 additions & 3 deletions examples/common/src/app.rs
@@ -26,7 +26,7 @@ use lightyear::transport::LOCAL_SOCKET;
use serde::{Deserialize, Serialize};

use crate::settings::*;
use crate::shared::{shared_config, SERVER_REPLICATION_INTERVAL};
use crate::shared::{shared_config, REPLICATION_INTERVAL};

/// CLI options to create an [`App`]
#[derive(Parser, PartialEq, Debug)]
@@ -361,6 +361,10 @@ fn client_app(settings: Settings, net_config: client::NetConfig) -> (App, Client
let client_config = ClientConfig {
shared: shared_config(Mode::Separate),
net: net_config,
replication: ReplicationConfig {
send_interval: REPLICATION_INTERVAL,
..default()
},
..default()
};
(app, client_config)
@@ -398,7 +402,7 @@ fn server_app(
shared: shared_config(Mode::Separate),
net: net_configs,
replication: ReplicationConfig {
send_interval: SERVER_REPLICATION_INTERVAL,
send_interval: REPLICATION_INTERVAL,
..default()
},
..default()
@@ -433,7 +437,7 @@ fn combined_app(
shared: shared_config(Mode::HostServer),
net: net_configs,
replication: ReplicationConfig {
send_interval: SERVER_REPLICATION_INTERVAL,
send_interval: REPLICATION_INTERVAL,
..default()
},
..default()
4 changes: 2 additions & 2 deletions examples/common/src/shared.rs
@@ -3,13 +3,13 @@ use std::time::Duration;

pub const FIXED_TIMESTEP_HZ: f64 = 64.0;

pub const SERVER_REPLICATION_INTERVAL: Duration = Duration::from_millis(100);
pub const REPLICATION_INTERVAL: Duration = Duration::from_millis(100);

/// The [`SharedConfig`] must be shared between the `ClientConfig` and `ServerConfig`
pub fn shared_config(mode: Mode) -> SharedConfig {
SharedConfig {
// send replication updates every 100ms
server_replication_send_interval: SERVER_REPLICATION_INTERVAL,
server_replication_send_interval: REPLICATION_INTERVAL,
tick: TickConfig {
tick_duration: Duration::from_secs_f64(1.0 / FIXED_TIMESTEP_HZ),
},
15 changes: 15 additions & 0 deletions examples/distributed_authority/.cargo/config.toml
@@ -0,0 +1,15 @@
[build]
# it is a good idea to specify your target here, to avoid losing incremental compilation when compiling to wasm
# target = "aarch64-apple-darwin"
rustflags = ["--cfg", "web_sys_unstable_apis"]

[target.wasm32-unknown-unknown]
runner = "wasm-server-runner"

# Enable max optimizations for dependencies, but not for our code:
[profile.dev.package."*"]
opt-level = 3

# Enable only a small amount of optimization in debug mode
[profile.dev]
opt-level = 1
37 changes: 37 additions & 0 deletions examples/distributed_authority/Cargo.toml
@@ -0,0 +1,37 @@
[package]
name = "distributed_authority"
version = "0.1.0"
authors = ["Charles Bournhonesque <charlesbour@gmail.com>"]
edition = "2021"
rust-version = "1.65"
description = "Examples for the lightyear server-client networking library for the Bevy game engine"
readme = "README.md"
repository = "https://github.com/cBournhonesque/lightyear"
keywords = ["bevy", "multiplayer", "networking", "netcode", "gamedev"]
categories = ["game-development", "network-programming"]
license = "MIT OR Apache-2.0"
publish = false

[features]
metrics = ["lightyear/metrics", "dep:metrics-exporter-prometheus"]

[dependencies]
lightyear_examples_common = { path = "../common" }
lightyear = { path = "../../lightyear", features = [
"steam",
"webtransport",
"websocket",
] }
serde = { version = "1.0", features = ["derive"] }
anyhow = { version = "1.0", features = [] }
tracing = "0.1"
tracing-subscriber = "0.3.17"
bevy = { version = "0.14", features = [
"multi_threaded",
"bevy_state",
"serialize",
] }
bevy_mod_picking = { version = "0.20", features = ["backend_bevy_ui"] }
rand = "0.8"
metrics-exporter-prometheus = { version = "0.15.1", optional = true }
bevy-inspector-egui = "0.25"
45 changes: 45 additions & 0 deletions examples/distributed_authority/README.md
@@ -0,0 +1,45 @@
# Distributed authority

This example showcases how to transfer authority over an entity to the server or to a client.
This can be useful if you're going for a 'thin server' approach where clients are simulating most of the world.

In this example, the ball is initially simulated on the server.
When a client gets close to the ball, the server transfers authority over the ball to that client.
This means that the client is now simulating the ball and sending replication updates to the server.


## Running the example

There are different 'modes' of operation:

- as a dedicated server with `cargo run -- server`
- as a listen server with `cargo run -- listen-server`. This will launch 2 independent bevy apps (client and server) in
separate threads.
They will communicate via channels (so with almost 0 latency)
- as a host server with `cargo run -- host-server`. This will launch a single bevy app, where the server will also act
as a client. Functionally, it is similar to the "listen-server" mode, but you have a single bevy `World` instead of
separate client and server `World`s.

Then you can launch clients with the commands:

- `cargo run -- client -c 1` (`-c 1` overrides the client id, to use client id 1)
- `cargo run -- client -c 2`

You can modify the file `assets/settings.ron` to modify some networking settings.


### Testing in wasm with webtransport

NOTE: I am using [trunk](https://trunkrs.dev/) to build and serve the wasm example.

To test the example in wasm, you can run the following command: `trunk serve`

You will need a valid SSL certificate to test the example in wasm using webtransport. Run the following
commands:

- `sh examples/generate.sh` (to generate the temporary SSL certificates, they are only valid for 2 weeks)
- `cargo run -- server` to start the server. The server will print out the certificate digest (something
like `1fd28860bd2010067cee636a64bcbb492142295b297fd8c480e604b70ce4d644`)
- You then have to replace the certificate digest in the `assets/settings.ron` file with the one that the server printed
out.
- then start the client wasm test with `trunk serve`
58 changes: 58 additions & 0 deletions examples/distributed_authority/assets/settings.ron
@@ -0,0 +1,58 @@
Settings(
client: ClientSettings(
inspector: true,
client_id: 0,
client_port: 0, // the OS will assign a random open port
server_addr: "127.0.0.1",
conditioner: Some(Conditioner(
latency_ms: 50,
jitter_ms: 5,
packet_loss: 0.02
)),
server_port: 5000,
transport: WebTransport(
// this is only needed for wasm, the self-signed certificates are only valid for 2 weeks
// the server will print the certificate digest on startup
certificate_digest: "6e:f2:d6:57:f8:f7:c9:ab:88:ae:59:6b:e8:97:cc:1e:a7:a4:ce:71:17:e1:39:79:4d:c6:2b:79:86:9a:c5:fc",
),
// server_port: 5001,
// transport: Udp,
// server_port: 5002,
// transport: WebSocket,
// server_port: 5003,
// transport: Steam(
// app_id: 480,
// )
),
server: ServerSettings(
headless: false,
inspector: true,
conditioner: Some(Conditioner(
latency_ms: 50,
jitter_ms: 5,
packet_loss: 0.02
)),
transport: [
WebTransport(
local_port: 5000
),
Udp(
local_port: 5001
),
WebSocket(
local_port: 5002
),
// Steam(
// app_id: 480,
// server_ip: "0.0.0.0",
// game_port: 5003,
// query_port: 27016,
// ),
],
),
shared: SharedSettings(
protocol_id: 0,
private_key: (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
compression: None,
)
)
9 changes: 9 additions & 0 deletions examples/distributed_authority/index.html
@@ -0,0 +1,9 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<title>Bevy game</title>
<link data-trunk rel="rust"/>
</head>
</html>