Make replication storage limit considerations more clear. #1309

Merged · 3 commits · Dec 16, 2024
configuration/parameters.md (2 changes: 1 addition & 1 deletion)
@@ -902,7 +902,7 @@ Description: Sets the storage location for data replicated from other Gravwell indexers.
### **Max-Replicated-Data-GB**
Default Value:
Example: `Max-Replicated-Data-GB=100`
Description: Sets, in gigabytes, the maximum amount of replicated data to store. When this is exceeded, the indexer will begin walking the replicated data to clean up; it will first remove any shards which have been deleted on the original indexer, then it will begin deleting the oldest shards. Once the storage size is below the limit, deletion will stop.
Description: Sets, in gigabytes, the maximum amount of replicated data to store. When this is exceeded, the indexer will begin walking the replicated data to clean up; it will first remove any shards which have been deleted on the original indexer, then cold shards, and finally the oldest remaining shards. Once the storage size is below the limit, deletion will stop.

### **Replication-Secret-Override**
Default Value:
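As a hedged sketch of how the `Max-Replicated-Data-GB` limit described above might look in an indexer's configuration (the bracketed `[Replication]` section name and the layout are assumptions about gravwell.conf; only the parameter documented in this diff is shown):

```
[Replication]
	# Once replicated storage exceeds 100 GB, begin the cleanup walk:
	# shards already deleted on the original indexer first, then cold shards,
	# then the oldest remaining shards, until usage falls back below the limit.
	Max-Replicated-Data-GB=100
```

If the parameter is left unset, replicated data is never pruned, which is the behavior the `configuration/replication.md` change below spells out.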
configuration/replication.md (2 changes: 1 addition & 1 deletion)
@@ -11,7 +11,7 @@ The replication system is logically separated into "Clients" and "Peers", with e

Replication connections are encrypted by default and require that indexers have functioning X509 certificates. If the certificates are not signed by a valid certificate authority (CA) then `Insecure-Skip-TLS-Verify=true` must be added to the Replication configuration section.

Replication storage nodes (nodes which receive replicated data) are allotted a specific amount of storage and will not delete data until that storage is exhausted. If a remote client node deletes data as part of normal ageout, the data shard is marked as deleted and prioritized for deletion when the replication node hits its storage limit. The replication system prioritizes deleted shards first, cold shards second, and oldest shards last. All replicated data is compressed; if a cold storage location is provided it is usually recommended that the replication storage location have the same storage capacity as the cold and hot storage combined.
Replication storage nodes (nodes which receive replicated data) are allotted a specific amount of storage and will not delete data unless the `Max-Replicated-Data-GB` parameter is set. Even with `Max-Replicated-Data-GB` set, the replication system will not delete replicated shards until the storage limit has been reached. If a remote client node deletes data as part of normal ageout, the data shard is marked as deleted and prioritized for deletion when the replication node hits its storage limit. The replication system prioritizes deleted shards first, cold shards second, and oldest shards last. All replicated data is compressed; if a cold storage location is provided, it is usually recommended that the replication storage location have the same storage capacity as the cold and hot storage combined.

```{note}
By default, the replication engine uses port 9406.
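As a hedged sizing sketch tying the recommendation above to `Max-Replicated-Data-GB` (the bracketed section name, the pairing of these two keys, and the specific sizes are illustrative assumptions; both parameters are documented in this pull request): an indexer holding roughly 500 GB of hot storage and 2000 GB of cold storage would suggest allotting about 2500 GB for its replicated data.

```
[Replication]
	# Certificates are self-signed, so skip CA verification as described above.
	Insecure-Skip-TLS-Verify=true
	# Roughly the source indexer's hot (500 GB) + cold (2000 GB) capacity combined.
	Max-Replicated-Data-GB=2500
```

Because replicated data is compressed, actual usage may stay under this cap; the combined hot-plus-cold figure is the conservative allotment the documentation recommends.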