add trigger kind "time" in rolling_file (estk#296)
* feat(trigger): Add "time" trigger

* feat(trigger): use pre_process for time trigger

* test(time): add test case of time trigger.

* feat(trigger): Add "time" trigger like log4j

* Update docs/Configuration.md

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* Update src/append/rolling_file/policy/compound/trigger/time.rs

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* Update src/append/rolling_file/policy/compound/trigger/time.rs

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* Update src/append/rolling_file/policy/compound/trigger/time.rs

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* Update src/append/rolling_file/policy/compound/trigger/time.rs

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* fix nix

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

* Update Configuration.md and time.rs

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>

---------

Co-authored-by: Bryan Conn <30739012+bconn98@users.noreply.github.com>
Dirreke and bconn98 authored Feb 2, 2024
1 parent a898a07 commit 6c6ace0
Showing 12 changed files with 663 additions and 34 deletions.
4 changes: 4 additions & 0 deletions Cargo.toml
@@ -25,6 +25,7 @@ compound_policy = []
delete_roller = []
fixed_window_roller = []
size_trigger = []
time_trigger = ["rand"]
json_encoder = ["serde", "serde_json", "chrono", "log-mdc", "log/serde", "thread-id"]
pattern_encoder = ["chrono", "log-mdc", "thread-id"]
ansi_writer = []
@@ -41,6 +42,7 @@ all_components = [
"delete_roller",
"fixed_window_roller",
"size_trigger",
"time_trigger",
"json_encoder",
"pattern_encoder",
"threshold_filter"
@@ -68,6 +70,7 @@ serde_json = { version = "1.0", optional = true }
serde_yaml = { version = "0.9", optional = true }
toml = { version = "0.8", optional = true }
parking_lot = { version = "0.12.0", optional = true }
rand = { version = "0.8", optional = true}
thiserror = "1.0.15"
anyhow = "1.0.28"
derivative = "2.2"
@@ -84,6 +87,7 @@ lazy_static = "1.4"
streaming-stats = "0.2.3"
humantime = "2.1"
tempfile = "3.8"
mock_instant = "0.3"

[[example]]
name = "json_logger"
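
The trigger is gated behind the new `time_trigger` Cargo feature (which also pulls in `rand`). A downstream crate could request it explicitly; a minimal sketch, where the version number is an assumption:

```toml
# Hypothetical downstream Cargo.toml entry; the version number is an assumption.
[dependencies]
log4rs = { version = "1", features = ["time_trigger"] }
```
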
60 changes: 54 additions & 6 deletions docs/Configuration.md
@@ -168,13 +168,15 @@ my_rolling_appender:
pattern: "logs/test.{}.log"
```

The new component is the _policy_ field. A policy must have `kind` like most
The new component is the _policy_ field. A policy must have the _kind_ field like most
other components; the default (and only supported) policy is `kind: compound`.
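
A minimal compound policy block, combining the `size` trigger and `delete` roller documented below, might look like this (values are illustrative):

```yml
policy:
  kind: compound
  trigger:
    kind: size
    limit: 10 mb
  roller:
    kind: delete
```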

The _trigger_ field is used to dictate when the log file should be rolled. The
only supported trigger is `kind: size`. There is a required field `limit`
which defines the maximum file size prior to a rolling of the file. The limit
field requires one of the following units in bytes, case does not matter:
The _trigger_ field is used to dictate when the log file should be rolled. It
supports two types: `size` and `time`.

For `size`, it requires a _limit_ field. The _limit_ field is a string which defines the
maximum file size prior to a rolling of the file. The limit field requires one of the
following units in bytes; case does not matter:

- b
- kb/kib
@@ -190,6 +192,47 @@ trigger:
limit: 10 mb
```

For `time`, it has three fields: _interval_, _modulate_, and _max_random_delay_.

The _interval_ field is a string which defines the time to roll the
file. The interval field supports the following units (seconds are used if the
unit is not specified); case does not matter:

- second[s]
- minute[s]
- hour[s]
- day[s]
- week[s]
- month[s]
- year[s]

> Note: `log4j` treats `Sunday` as the first day of the week, but `log4rs` treats
> `Monday` as the first day of the week, which follows the `chrono` crate
> and the `ISO 8601` standard. So when using `week`, the log file will be rolled
> on `Monday` instead of `Sunday`.

The _modulate_ field is an optional boolean. It indicates whether the interval should
be adjusted to cause the next rollover to occur on the interval boundary. For example,
if the interval is 4 hours and the current hour is 3 am, when true, the first rollover
will occur at 4 am and the next ones will occur at 8 am, noon, 4 pm, etc. The default
value is false.

The _max_random_delay_ field is an optional integer. It indicates the maximum number
of seconds to randomly delay a rollover. By default, this is 0, which indicates no
delay. This setting is useful on servers where multiple applications are configured
to roll over log files at the same time, as it spreads the load of doing so across
time.

i.e.

```yml
trigger:
kind: time
interval: 1 day
modulate: false
max_random_delay: 0
```
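
For instance, to roll every four hours aligned to clock boundaries, as in the _modulate_ example above, with up to 30 seconds of random delay (values are illustrative):

```yml
trigger:
  kind: time
  interval: 4 hours
  modulate: true
  max_random_delay: 30
```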

The _roller_ field supports two types: delete, and fixed_window. The delete
roller does not take any other configuration fields. The fixed_window roller
supports three fields: pattern, base, and count. The most current log file will
@@ -202,14 +245,19 @@ that if the file extension of the pattern is `.gz` and the `gzip` Cargo
feature is enabled, the archive files will be gzip-compressed.

> Note: This pattern field is only used for archived files. The `path` field
of the higher level `rolling_file` will be used for the active log file.
> of the higher level `rolling_file` will be used for the active log file.

The _base_ field is the starting index used to name rolling files.

The _count_ field is the exclusive maximum index used to name rolling files.
However, be warned that the roller renames every file when a log rolls over.
Having a large count value can negatively impact performance.

> Note: If you use `trigger: time`, the log file will be rolled before the record
> gets written, which ensures that the logs are rolled at the correct point
> instead of leaving a single line of logs in the previous log file. However,
> this may cause a substantial slowdown if the `background` feature is not enabled.

i.e.

```yml
4 changes: 2 additions & 2 deletions examples/compile_time_config.rs
@@ -7,7 +7,7 @@ fn main() {
let config = serde_yaml::from_str(config_str).unwrap();
log4rs::init_raw_config(config).unwrap();

info!("Goes to console");
error!("Goes to console");
info!("Goes to console, file and rolling file");
error!("Goes to console, file and rolling file");
trace!("Doesn't go to console as it is filtered out");
}
41 changes: 31 additions & 10 deletions examples/sample_config.yml
@@ -1,12 +1,33 @@
appenders:
stdout:
kind: console
encoder:
pattern: "{d(%+)(utc)} [{f}:{L}] {h({l})} {M}:{m}{n}"
filters:
- kind: threshold
level: info
stdout:
kind: console
encoder:
pattern: "{d(%+)(utc)} [{f}:{L}] {h({l})} {M}:{m}{n}"
filters:
- kind: threshold
level: info
file:
kind: file
path: "log/file.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
rollingfile:
kind: rolling_file
path: "log/rolling_file.log"
encoder:
pattern: "[{d(%Y-%m-%dT%H:%M:%S%.6f)} {h({l}):<5.5} {M}] {m}{n}"
policy:
trigger:
kind: time
interval: 1 minute
roller:
kind: fixed_window
pattern: "log/old-rolling_file-{}.log"
base: 0
count: 2
root:
level: info
appenders:
- stdout
level: info
appenders:
- stdout
- file
- rollingfile
54 changes: 38 additions & 16 deletions src/append/rolling_file/mod.rs
@@ -167,22 +167,41 @@ impl Append for RollingFileAppender {
// TODO(eas): Perhaps this is better as a concurrent queue?
let mut writer = self.writer.lock();

let len = {
let writer = self.get_writer(&mut writer)?;
self.encoder.encode(writer, record)?;
writer.flush()?;
writer.len
};
let is_pre_process = self.policy.is_pre_process();
let log_writer = self.get_writer(&mut writer)?;

let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};
if is_pre_process {
let len = log_writer.len;

let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};

// TODO(eas): Idea: make this optionally return a future, and if so, we initialize a queue for
// data that comes in while we are processing the file rotation.

self.policy.process(&mut file)?;

let log_writer_new = self.get_writer(&mut writer)?;
self.encoder.encode(log_writer_new, record)?;
log_writer_new.flush()?;
} else {
self.encoder.encode(log_writer, record)?;
log_writer.flush()?;
let len = log_writer.len;

// TODO(eas): Idea: make this optionally return a future, and if so, we initialize a queue for
// data that comes in while we are processing the file rotation.
self.policy.process(&mut file)
let mut file = LogFile {
writer: &mut writer,
path: &self.path,
len,
};

self.policy.process(&mut file)?;
}

Ok(())
}

fn flush(&self) {}
@@ -371,8 +390,8 @@ appenders:
path: {0}/foo.log
policy:
trigger:
kind: size
limit: 1024
kind: time
interval: 2 minutes
roller:
kind: delete
bar:
@@ -405,6 +424,9 @@ appenders:
fn process(&self, _: &mut LogFile) -> anyhow::Result<()> {
Ok(())
}
fn is_pre_process(&self) -> bool {
false
}
}

#[test]
4 changes: 4 additions & 0 deletions src/append/rolling_file/policy/compound/mod.rs
@@ -107,6 +107,10 @@ impl Policy for CompoundPolicy {
}
Ok(())
}

fn is_pre_process(&self) -> bool {
self.trigger.is_pre_process()
}
}

/// A deserializer for the `CompoundPolicyDeserializer`.
8 changes: 8 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/mod.rs
@@ -9,10 +9,18 @@ use crate::config::Deserializable;
#[cfg(feature = "size_trigger")]
pub mod size;

#[cfg(feature = "time_trigger")]
pub mod time;

/// A trait which identifies if the active log file should be rolled over.
pub trait Trigger: fmt::Debug + Send + Sync + 'static {
/// Determines if the active log file should be rolled over.
fn trigger(&self, file: &LogFile) -> anyhow::Result<bool>;

/// Returns whether the log file should be rolled before the record is written
/// (the pre-process flag).
///
/// Returns true for time triggers and false for size triggers.
fn is_pre_process(&self) -> bool;
}

#[cfg(feature = "config_parsing")]
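
Because `is_pre_process` is now part of the trait, any custom `Trigger` implementation must decide whether it is evaluated before or after the record is written. Below is a minimal sketch of a hypothetical custom trigger; the `CountTrigger` type is invented for illustration and assumes the `anyhow` crate plus the relevant rolling-file Cargo features are enabled.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

use log4rs::append::rolling_file::policy::compound::trigger::Trigger;
use log4rs::append::rolling_file::LogFile;

/// Hypothetical trigger that rolls after a fixed number of appended records.
#[derive(Debug)]
struct CountTrigger {
    limit: u64,
    count: AtomicU64,
}

impl Trigger for CountTrigger {
    fn trigger(&self, _file: &LogFile) -> anyhow::Result<bool> {
        // Count this record; roll whenever the count reaches a multiple of
        // `limit` (assumed to be non-zero).
        let n = self.count.fetch_add(1, Ordering::Relaxed) + 1;
        Ok(n % self.limit == 0)
    }

    fn is_pre_process(&self) -> bool {
        // Evaluate after the record is written, like the size trigger.
        false
    }
}
```
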
15 changes: 15 additions & 0 deletions src/append/rolling_file/policy/compound/trigger/size.rs
@@ -117,6 +117,10 @@ impl Trigger for SizeTrigger {
fn trigger(&self, file: &LogFile) -> anyhow::Result<bool> {
Ok(file.len_estimate() > self.limit)
}

fn is_pre_process(&self) -> bool {
false
}
}

/// A deserializer for the `SizeTrigger`.
@@ -149,3 +153,14 @@ impl Deserialize for SizeTriggerDeserializer {
Ok(Box::new(SizeTrigger::new(config.limit)))
}
}

#[cfg(test)]
mod test {
use super::*;

#[test]
fn pre_process() {
let trigger = SizeTrigger::new(2048);
assert!(!trigger.is_pre_process());
}
}