Merge branch 'main' into website
cBournhonesque committed May 14, 2024
2 parents 29ab429 + 7da67eb commit 09a80ce
Showing 176 changed files with 7,599 additions and 8,571 deletions.
18 changes: 18 additions & 0 deletions NOTES.md
@@ -1,3 +1,21 @@
- Add a `Controlled` component to an entity to specify that the player is controlling the entity
- field (`controlled_by: NetworkTarget`) on the server `Replicate`
- it means that the `Controlled` component gets replicated to the client who has control of this entity.
- then the client can filter on `Controlled` to add `Prediction` behaviour, for example. And add Interpolation on non-controlled entities?
- the server also creates an entity for each connected client. Each client entity has a component indicating the
  list of entities under the client's control: `HasControl(EntityHashSet)` (with EntityMapping)
- if the player disconnects, we can automatically despawn all entities under their control (this behavior can be made configurable in the future)
- Would `Controlled` be synced to the Predicted entity? Maybe? If we predict other players, it would be nice to know
  which Predicted entity is under our control.
- client->server replication:

- PROS:
- on the server, users can easily find the list of entities under control of a specific client
- on the client, users that receive an entity from the server can quickly check if they have control of it, without
having to compare client_ids.



- Transferring ownership to another client.
- commands.transfer_ownership(entity, new_owner)
- sends a message AuthorityTransfer to the new client who should replicate the entity
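The despawn-on-disconnect bookkeeping described in these notes could be sketched with plain data structures. Everything here is hypothetical (`ControlMap`, the `u64` id aliases): it is a stand-in for the `HasControl(EntityHashSet)` component idea above, not lightyear's actual implementation.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-ins for bevy's Entity and lightyear's ClientId.
type Entity = u64;
type ClientId = u64;

/// Server-side bookkeeping: which entities each client controls
/// (a stand-in for the `HasControl(EntityHashSet)` component above).
#[derive(Default)]
pub struct ControlMap {
    controlled: HashMap<ClientId, HashSet<Entity>>,
}

impl ControlMap {
    pub fn give_control(&mut self, client: ClientId, entity: Entity) {
        self.controlled.entry(client).or_default().insert(entity);
    }

    /// On disconnect, drain every entity the client controlled so the
    /// caller can despawn them all.
    pub fn on_disconnect(&mut self, client: ClientId) -> HashSet<Entity> {
        self.controlled.remove(&client).unwrap_or_default()
    }
}
```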
4 changes: 2 additions & 2 deletions README.md
@@ -40,8 +40,8 @@ app.add_plugins(InputPlugin::<Inputs>::default());
// components
app.register_component::<PlayerId>(ChannelDirection::ServerToClient)
-    .add_prediction::<PlayerId>(ComponentSyncMode::Once)
-    .add_interpolation::<PlayerId>(ComponentSyncMode::Once);
+    .add_prediction(ComponentSyncMode::Once)
+    .add_interpolation(ComponentSyncMode::Once);
// channels
app.add_channel::<Channel1>(ChannelSettings {
3 changes: 2 additions & 1 deletion benches/message.rs
@@ -11,9 +11,10 @@ use divan::{AllocProfiler, Bencher};
use lightyear::client::sync::SyncConfig;
use lightyear::prelude::client::{InterpolationConfig, PredictionConfig};
use lightyear::prelude::{client, server, MessageRegistry, Tick, TickManager};
-use lightyear::prelude::{ClientId, NetworkTarget, SharedConfig, TickConfig};
+use lightyear::prelude::{ClientId, SharedConfig, TickConfig};
use lightyear::server::input::InputBuffers;
use lightyear::shared::replication::components::Replicate;
use lightyear::shared::replication::network_target::NetworkTarget;
use lightyear_benches::local_stepper::{LocalBevyStepper, Step as LocalStep};
use lightyear_benches::protocol::*;

25 changes: 4 additions & 21 deletions benches/spawn.rs
@@ -13,9 +13,10 @@ use lightyear::prelude::client::{
ClientConnection, InterpolationConfig, NetClient, PredictionConfig,
};
use lightyear::prelude::{client, server, MessageRegistry, Tick, TickManager};
-use lightyear::prelude::{ClientId, NetworkTarget, SharedConfig, TickConfig};
+use lightyear::prelude::{ClientId, SharedConfig, TickConfig};
use lightyear::server::input::InputBuffers;
use lightyear::shared::replication::components::Replicate;
use lightyear::shared::replication::network_target::NetworkTarget;
use lightyear_benches::local_stepper::{LocalBevyStepper, Step as LocalStep};
use lightyear_benches::protocol::*;

@@ -53,16 +54,7 @@ fn spawn_local(bencher: Bencher, n: usize) {
);
stepper.init();

-    let entities = vec![
-        (
-            Component1(0.0),
-            Replicate {
-                replication_target: NetworkTarget::All,
-                ..default()
-            },
-        );
-        n
-    ];
+    let entities = vec![(Component1(0.0), Replicate::default()); n];

stepper.server_app.world.spawn_batch(entities);
stepper
@@ -109,16 +101,7 @@ fn spawn_multi_clients(bencher: Bencher, n: usize) {
);
stepper.init();

-    let entities = vec![
-        (
-            Component1(0.0),
-            Replicate {
-                replication_target: NetworkTarget::All,
-                ..default()
-            },
-        );
-        FIXED_NUM_ENTITIES
-    ];
+    let entities = vec![(Component1(0.0), Replicate::default()); FIXED_NUM_ENTITIES];

stepper.server_app.world.spawn_batch(entities);
stepper
8 changes: 4 additions & 4 deletions benches/src/local_stepper.rs
@@ -71,7 +71,7 @@ impl LocalBevyStepper {
// channels to receive a message from/to server
let (from_server_send, from_server_recv) = crossbeam_channel::unbounded();
let (to_server_send, to_server_recv) = crossbeam_channel::unbounded();
-        let client_io = IoConfig::from_transport(TransportConfig::LocalChannel {
+        let client_io = client::IoConfig::from_transport(ClientTransport::LocalChannel {
recv: from_server_recv,
send: to_server_send,
});
@@ -111,7 +111,7 @@ impl LocalBevyStepper {
interpolation: interpolation_config.clone(),
..default()
};
-            client_app.add_plugins((ClientPlugin::new(config), ProtocolPlugin));
+            client_app.add_plugins((ClientPlugins::new(config), ProtocolPlugin));
// Initialize Real time (needed only for the first TimeSystem run)
client_app
.world
@@ -122,7 +122,7 @@ }
}

// Setup server
-        let server_io = IoConfig::from_transport(TransportConfig::Channels {
+        let server_io = server::IoConfig::from_transport(ServerTransport::Channels {
channels: client_params,
});

@@ -151,7 +151,7 @@ }]
}],
..default()
};
-        server_app.add_plugins((ServerPlugin::new(config), ProtocolPlugin));
+        server_app.add_plugins((ServerPlugins::new(config), ProtocolPlugin));

// Initialize Real time (needed only for the first TimeSystem run)
server_app
15 changes: 8 additions & 7 deletions benches/src/protocol.rs
@@ -3,6 +3,7 @@ use bevy::prelude::Component;
use bevy::utils::default;
use derive_more::{Add, Mul};
use lightyear::client::components::ComponentSyncMode;
use lightyear::client::prediction::plugin::add_prediction_systems;
use serde::{Deserialize, Serialize};
use std::ops::Mul;

@@ -55,13 +56,13 @@ impl Plugin for ProtocolPlugin {
// inputs
app.add_plugins(InputPlugin::<MyInput>::default());
// components
-        app.register_component::<Component1>(ChannelDirection::ServerToClient);
-        app.add_prediction::<Component1>(ComponentSyncMode::Full);
-        app.add_linear_interpolation_fn::<Component1>();
-        app.register_component::<Component2>(ChannelDirection::ServerToClient);
-        app.add_prediction::<Component2>(ComponentSyncMode::Simple);
-        app.register_component::<Component3>(ChannelDirection::ServerToClient);
-        app.add_prediction::<Component3>(ComponentSyncMode::Once);
+        app.register_component::<Component1>(ChannelDirection::ServerToClient)
+            .add_prediction(ComponentSyncMode::Full)
+            .add_linear_interpolation_fn();
+        app.register_component::<Component2>(ChannelDirection::ServerToClient)
+            .add_prediction(ComponentSyncMode::Simple);
+        app.register_component::<Component3>(ChannelDirection::ServerToClient)
+            .add_prediction(ComponentSyncMode::Once);
// channels
app.add_channel::<Channel1>(ChannelSettings {
mode: ChannelMode::OrderedReliable(ReliableSettings::default()),
18 changes: 18 additions & 0 deletions book/src/concepts/advanced_replication/client_replication.md
@@ -7,6 +7,24 @@ There are different possibilities.

To replicate a client-entity to the server, it is exactly the same as for a server-entity.
Just add the `Replicate` component to the entity and it will be replicated to the server.
```rust
fn handle_connection(
mut connection_event: EventReader<ConnectEvent>,
mut commands: Commands,
) {
for event in connection_event.read() {
let local_client_id = event.client_id();
commands.spawn((
/* your other components here */
Replicate {
replication_target: NetworkTarget::All,
interpolation_target: NetworkTarget::AllExcept(vec![local_client_id]),
..default()
},
));
}
}
```

Note that `prediction_target` and `interpolation_target` will be unused as the server doesn't do any
prediction or interpolation.
108 changes: 63 additions & 45 deletions book/src/concepts/advanced_replication/interest_management.md
@@ -12,66 +12,84 @@ There are two main advantages:
For example, in an RTS, you can avoid replicating units that are in fog-of-war.



## Implementation

-In lightyear, interest management is implemented with the concept of `Rooms`.
-
-An entity can join one or more rooms, and clients can similarly join one or more rooms.
-
-We then compute which entities should be replicated to which clients by looking at which rooms they are both in.
-
-To summarize:
-- if a client is in a room but the entity is not (or vice-versa), we will not replicate that entity to that client
-- if the client and entity are both in the same room, we will replicate that entity to that client
-- if a client leaves a room that the entity is in (or an entity leaves a room that the client is in), we will despawn that entity for that client
-- if a client joins a room that the entity is in (or an entity joins a room that the client is in), we will spawn that entity for that client
+### VisibilityMode
+
+The first step is to think about the `VisibilityMode` of your entities. It is defined on the `Replicate` component.
```rust,noplayground
#[derive(Default)]
pub enum VisibilityMode {
/// We will replicate this entity to all clients that are present in the [`NetworkTarget`] AND use visibility on top of that
InterestManagement,
/// We will replicate this entity to all clients that are present in the [`NetworkTarget`]
#[default]
All
}
```

+If `VisibilityMode::All`, you have a coarse way of doing interest management, which is to use the `replication_target` to
+specify which clients will receive replication updates. The `replication_target` is a `NetworkTarget`, which is a list of clients
+that we should replicate to.

-Since it can be annoying to always have to add your entities to the correct rooms, especially if you just want to replicate them to everyone,
-we introduce several concepts to make this more convenient.

+In some cases, you might want to use `VisibilityMode::InterestManagement`, which is a more fine-grained way of doing interest management.
+This adds additional constraints on top of the `replication_target`: we will **never** send updates to a client that is not in the
+`replication_target` of your entity.

#### NetworkTarget

```rust,noplayground
/// NetworkTarget indicates which clients should receive some message
#[derive(Default)]
pub enum NetworkTarget {
    /// Message sent to no client
    #[default]
    None,
    /// Message sent to all clients except for one
    AllExcept(ClientId),
    /// Message sent to all clients
    All,
    /// Message sent to only one client
    Only(ClientId),
}
```
### Interest management

NetworkTarget is used to indicate, very roughly, which clients a given entity should be replicated to.
Note that this is in addition to rooms.
If you set `VisibilityMode::InterestManagement`, we will add a `ReplicateVisibility` component to your entity,
which is a cached list of clients that should receive replication updates about this entity.

Even if an entity and a client are in the same room, the entity will not be replicated to the client if the `NetworkTarget` forbids it (for instance, if it is not `All` or `Only(client_id)`).
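The check above boils down to a membership test on the `NetworkTarget`. A self-contained sketch of that test (this simplified enum is illustrative, not lightyear's actual implementation: `ClientId` is just a `u64` here, and `AllExcept`/`Only` take a list):

```rust
/// Simplified, illustrative version of `NetworkTarget`.
pub enum NetworkTarget {
    /// Message sent to no client
    None,
    /// Message sent to all clients
    All,
    /// Message sent to all clients except the listed ones
    AllExcept(Vec<u64>),
    /// Message sent to only the listed clients
    Only(Vec<u64>),
}

impl NetworkTarget {
    /// The membership test used to decide whether `client_id` receives the message.
    pub fn targets(&self, client_id: u64) -> bool {
        match self {
            NetworkTarget::None => false,
            NetworkTarget::All => true,
            NetworkTarget::AllExcept(excluded) => !excluded.contains(&client_id),
            NetworkTarget::Only(included) => included.contains(&client_id),
        }
    }
}
```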
There are several ways to update the visibility of an entity:
- you can either update the visibility directly with the `VisibilityManager` resource
- we also provide a more static way of updating the visibility with the concept of `Rooms` and the `RoomManager` resource.

However, if a `NetworkTarget` is `All`, that doesn't necessarily mean that the entity will be replicated to all clients; they still need to be in the same rooms.
There is a setting to change this behaviour: the `ReplicationMode`.
#### Immediate visibility update

You can simply directly update the visibility of an entity/client pair with the `VisibilityManager` resource.

```rust
use bevy::prelude::*;
use lightyear::prelude::*;
use lightyear::prelude::server::*;

fn my_system(
    mut visibility_manager: ResMut<VisibilityManager>,
) {
    // you can update the visibility like so
    visibility_manager.gain_visibility(ClientId::Netcode(1), Entity::PLACEHOLDER);
    visibility_manager.lose_visibility(ClientId::Netcode(2), Entity::PLACEHOLDER);
}
```

#### ReplicationMode

We also introduce:
```rust,noplayground
#[derive(Default)]
pub enum ReplicationMode {
    /// Use rooms for replication
    Room,
    /// We will replicate this entity to clients using only the [`NetworkTarget`], without caring about rooms
    #[default]
    NetworkTarget,
}
```

If the `ReplicationMode` is `Room`, then the `NetworkTarget` is a prerequisite for replication, but not sufficient:
the entity will be replicated only if the client and the entity are in the same room AND the `NetworkTarget` allows it.
#### Rooms

An entity can join one or more rooms, and clients can similarly join one or more rooms.

We then compute which entities should be replicated to which clients by looking at which rooms they are both in.

To summarize:
- if a client is in a room but the entity is not (or vice-versa), we will not replicate that entity to that client
- if the client and entity are both in the same room, we will replicate that entity to that client
- if a client leaves a room that the entity is in (or an entity leaves a room that the client is in), we will despawn that entity for that client
- if a client joins a room that the entity is in (or an entity joins a room that the client is in), we will spawn that entity for that client

This can be useful for games where you have physical instances of rooms:
- a RPG where you can have different rooms (tavern, cave, city, etc.)
- a server could have multiple lobbies, and each lobby is in its own room
- a map could be divided into a grid of 2D squares, where each square is its own room

If the `ReplicationMode` is `NetworkTarget`, then we will only use the value of `replicate.replication_target` without checking rooms at all.
```rust
use bevy::prelude::*;
use lightyear::prelude::*;
use lightyear::prelude::server::*;

fn room_system(mut manager: ResMut<RoomManager>) {
    // add the client and the entity to the same room:
    // the entity will now be visible to the client
    manager.add_client(ClientId::Netcode(0), RoomId(0));
    manager.add_entity(Entity::PLACEHOLDER, RoomId(0));
}
```
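The room rules summarized above boil down to a set intersection: an entity is replicated to a client iff they share at least one room. A minimal, self-contained sketch (the `Rooms` type and `u64` id aliases are hypothetical stand-ins, not lightyear's `RoomManager`):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-ins for the real id types.
type RoomId = u64;
type Entity = u64;
type ClientId = u64;

/// Illustrative room bookkeeping.
#[derive(Default)]
pub struct Rooms {
    client_rooms: HashMap<ClientId, HashSet<RoomId>>,
    entity_rooms: HashMap<Entity, HashSet<RoomId>>,
}

impl Rooms {
    pub fn add_client(&mut self, client: ClientId, room: RoomId) {
        self.client_rooms.entry(client).or_default().insert(room);
    }

    pub fn add_entity(&mut self, entity: Entity, room: RoomId) {
        self.entity_rooms.entry(entity).or_default().insert(room);
    }

    /// The entity is replicated to the client iff they share at least one room.
    pub fn is_visible(&self, client: ClientId, entity: Entity) -> bool {
        match (self.client_rooms.get(&client), self.entity_rooms.get(&entity)) {
            (Some(c), Some(e)) => !c.is_disjoint(e),
            _ => false,
        }
    }
}
```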
31 changes: 17 additions & 14 deletions book/src/concepts/advanced_replication/replication_logic.md
@@ -13,12 +13,12 @@ Those two are handled differently by the replication system.

There are certain invariants/guarantees that we wish to maintain with replication.

+**Rule #1a**: we would like a replicated entity to be in a consistent state compared to what it was on the server: at no point do we want a situation where
+a given component is on tick T1 but another component of the same entity is on tick T2. The replicated entity should be equal to a version of the remote entity in the past.
+Similarly, we would not want one component of an entity to be inserted later than other components. This could be disastrous because some other system could depend on both
+components being present together!

-Rule #1: we would like a replicated entity to be in a consistent state compared to what it was on the server: at no point do we want a situation where
-a given component is on tick T1 but another component of the same entity is on tick T2. Similarly, we would not want one component of an entity to be inserted
-later than other components. This could be disastrous because some other system could depend on both components being present together!

Rule #2: we want to be able to extend this guarantee to multiple entities.
**Rule #2**: we want to be able to extend this guarantee to multiple entities.
I will give two relevant examples:
- client prediction: for client-prediction, we want to rollback if a received server-state doesn't match the predicted history.
If we are running client-prediction for multiple entities that are not in the same tick, we could have situations where we need to rollback one entity starting from tick T1
@@ -41,25 +41,29 @@ will be equivalent to the state of the group on the server at a given previous tick

## Entity Actions

-For each [`ReplicationGroup`](crate::prelude::ReplicationGroup), Entity Actions are replicated in an `OrderedReliable` manner:
-- we apply each action message *in order*
+For each [`ReplicationGroup`](crate::prelude::ReplicationGroup), Entity Actions are replicated in an `OrderedReliable` manner.

### Send

Whenever there are any actions for a given [`ReplicationGroup`](crate::prelude::ReplicationGroup), we send them as a single message AND we include any updates for this group as well.
This is to guarantee consistency; if we sent them as 2 separate messages, the packet containing the updates could get lost and we would be in an inconsistent state.
Each message for a given [`ReplicationGroup`] is associated with a message id (a monotonically increasing number) that is used to order the messages on the client.

### Receive

On the receive side, we buffer the EntityActions that we receive, so that we can apply them in order (message id 1, 2, 3, 4, etc.).
We keep track of the next message id that we should receive.
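The buffering described above can be sketched as follows: out-of-order messages are held back, and a message is only applied once every message with a smaller id has been applied. This is an illustrative sketch, not lightyear's actual types (`String` payloads stand in for real messages):

```rust
use std::collections::BTreeMap;

/// Buffers EntityActions messages so they are applied strictly in order.
pub struct ActionReceiver {
    next_id: u64,
    buffer: BTreeMap<u64, String>, // message id -> payload
}

impl ActionReceiver {
    pub fn new() -> Self {
        Self { next_id: 0, buffer: BTreeMap::new() }
    }

    /// Buffer an incoming message, then return every message that is now
    /// ready to be applied, in order of message id.
    pub fn receive(&mut self, id: u64, payload: String) -> Vec<String> {
        self.buffer.insert(id, payload);
        let mut ready = Vec::new();
        while let Some(next) = self.buffer.remove(&self.next_id) {
            ready.push(next);
            self.next_id += 1;
        }
        ready
    }
}
```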


## Entity Updates

### Send

-We gather all updates since the most recent of:
-- last time we sent some EntityActions for the Replication Group
-- last time we got an ACK from the client that the EntityUpdates was received
+We gather all updates since the last time we got an ACK from the client that the EntityUpdates were received.

The reason for this is:
-- we could be gathering all the component changes since the last time we sent EntityActions, but then it could be wasteful if the last time we had any entity actions was a long time ago
-and many components got updated since
+- we could be gathering all the component changes since the last time we sent EntityActions, but then it could be wasteful
+  if the last time we had any entity actions was a long time ago and many components got updated since.
- we could be gathering all the component changes since the last time we sent a message, but then we could have a situation where:
- we send changes for C1 on tick 1
- we send changes for C2 on tick 2
Expand All @@ -68,11 +72,10 @@ The reason for this is:

### Receive


For each [`ReplicationGroup`](crate::prelude::ReplicationGroup), Entity Updates are replicated in a `SequencedUnreliable` manner.
We have some additional constraints:
- we only apply EntityUpdates if we have already applied all the EntityActions for the given [`ReplicationGroup`](crate::prelude::ReplicationGroup) that were sent when the Updates were sent.
- for example we send A1, U2, A3, U4; we receive U4 first, but we only apply it if we have applied A3, as those are the latest EntityActions sent when U4 was sent
-- if we received a more rencet updates that can be applied, we discard the older one (Sequencing)
+- if we received a more recent update that can be applied, we discard the older one (Sequencing)
- for example if we send A1, U2, U3 and we receive A1 then U3, we discard U2 because it is older than U3
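Both constraints can be sketched together: an update carries the tick it was sent at plus the id of the last actions message sent for the group, and is applied only if that action has already been applied and no newer update has been applied yet. This is an illustrative sketch under those assumptions, not lightyear's actual implementation:

```rust
/// Illustrative receive-side state for `SequencedUnreliable` entity updates.
pub struct UpdateReceiver {
    /// Tick of the most recent update applied (sequencing).
    latest_applied_tick: Option<u64>,
    /// Id of the last EntityActions message applied for this group.
    last_applied_action: u64,
}

impl UpdateReceiver {
    pub fn new() -> Self {
        Self { latest_applied_tick: None, last_applied_action: 0 }
    }

    pub fn apply_actions(&mut self, action_id: u64) {
        self.last_applied_action = action_id;
    }

    /// Should an update sent at `tick`, which depends on actions up to
    /// `required_action`, be applied now?
    pub fn should_apply(&mut self, tick: u64, required_action: u64) -> bool {
        // constraint 1: the prerequisite EntityActions must already be applied
        if required_action > self.last_applied_action {
            return false;
        }
        // constraint 2 (sequencing): discard updates older than the latest applied one
        if self.latest_applied_tick.map_or(false, |t| tick <= t) {
            return false;
        }
        self.latest_applied_tick = Some(tick);
        true
    }
}
```

For example, with A1, U2, A3, U4: if U4 arrives before A3 it is held back; once A3 is applied, U4 applies, and a late-arriving U2 is then discarded as older.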

