
Example for cascading drop triggered by eviction #350

Merged
merged 4 commits into from
Dec 28, 2023
Conversation

peter-scholtens
Contributor

Since the eviction listener cannot guarantee that all references are dropped in time, in some cases it is better to rely on the Drop trait of the cached items. A web server may need to hold both a session cache and a tree/map of (logged-in or related) users. This change also adds a ttl variable for consistency and updates the sync() comment.
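The core idea can be sketched with the standard library alone (the real example, cascading_drop_async.rs, uses a moka cache as the session store; the type and field names below are illustrative, not the example's actual code): each Session holds an Arc to a User, so when eviction drops the session, the Arc strong count falls back to the tree's own single reference and the user can be pruned.

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Hypothetical minimal types standing in for the example's real ones.
struct User {
    name: String,
}

struct Session {
    user: Arc<User>,
}

// The Drop impl runs whenever a session is dropped -- including when the
// cache evicts it -- regardless of when the eviction listener fires.
impl Drop for Session {
    fn drop(&mut self) {
        println!(
            "Dropping session holding a reference to user {}",
            self.user.name
        );
    }
}

// Prune users that are no longer referenced by any live session
// (strong count 1 means only the tree itself holds the Arc).
fn prune(users: &mut BTreeMap<u64, Arc<User>>) {
    users.retain(|_, user| Arc::strong_count(user) > 1);
}

fn main() {
    let mut users: BTreeMap<u64, Arc<User>> = BTreeMap::new();
    users.insert(1, Arc::new(User { name: "Bob".into() }));

    let session = Session {
        user: Arc::clone(users.get(&1).unwrap()),
    };
    assert_eq!(Arc::strong_count(users.get(&1).unwrap()), 2);

    drop(session); // in the example, cache eviction triggers this drop
    prune(&mut users);
    assert!(users.is_empty());
}
```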

…added with cascading drop triggered by eviction.
@tatsuya6502
Member

tatsuya6502 commented Dec 28, 2023

@peter-scholtens

Thank you so much for creating the example. It looks good to me.

However, I found that cascading_drop_async panics on arm64 platforms (macOS, Linux) as follows:

thread 'main' panicked at examples/cascading_drop_async.rs:188:5:
assertion failed: group_tree.lock().unwrap().is_empty()

I believe this is due to an issue in quanta::Instant::elapsed(), which may report a shorter duration than std::time::Instant::elapsed() does. Because of this, cached Sessions do not expire after exactly 3 seconds, but after 3 seconds plus some additional time.
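The effect can be modeled with a toy expiry check (illustrative only; this is not moka's actual expiry code): an entry expires once the cache's clock reports elapsed >= ttl, so a clock that under-reports elapsed time pushes the observed expiry past the wall-clock deadline.

```rust
// Toy model of TTL expiry. If the cache's clock (here, a plain number)
// under-reports elapsed time relative to the wall clock, the entry is
// still considered live at the wall-clock deadline and expires later.
fn is_expired(reported_elapsed_ms: u64, ttl_ms: u64) -> bool {
    reported_elapsed_ms >= ttl_ms
}

fn main() {
    let ttl_ms = 3_000;
    // Wall clock says 3 s have passed, but the cache's clock reports
    // slightly less, so the entry has not yet expired.
    assert!(!is_expired(2_999, ttl_ms));
    // Only after some additional time does the reported value reach ttl.
    assert!(is_expired(3_001, ttl_ms));
}
```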

To confirm this, I changed the code as follows and ran it on x86_64 and arm64 platforms:

--- a/examples/cascading_drop_async.rs
+++ b/examples/cascading_drop_async.rs
@@ -178,8 +178,9 @@ async fn main() {
     }
 
     println!("Waiting");
-    for t in 1..=ttl + 1 {
+    for t in 1..=ttl + 2 {
         sleep(Duration::from_secs(1));
+        println!("t = {}, calling `get` and `run_pending_tasks`", t);
         sessions_cache.get(&0).await;
         sessions_cache.run_pending_tasks().await;
         println!("t = {}, pending: {}", t, sessions_cache.entry_count());

As you can see in the following terminal logs:

  1. On x86_64, sessions were evicted after calling run_pending_tasks at t = 3
  2. On arm64, sessions were evicted after calling run_pending_tasks at t = 4

x86_64-unknown-linux-gnu

User Bob has friends Alice, Charlie,
Found session 6751FC53 from user_id: 1
Found session 4C7CAF90 from user_id: 1
Found session 55679ED1 from user_id: 2
Waiting
t = 1, calling `get` and `run_pending_tasks`
t = 1, pending: 3
t = 2, calling `get` and `run_pending_tasks`
t = 2, pending: 3
t = 3, calling `get` and `run_pending_tasks`
Evicted session with key 6751FC53 of user_id 1 because Expired
Evicted session with key 4C7CAF90 of user_id 1 because Expired
Evicted session with key 55679ED1 of user_id 2 because Expired
Dropping session holding a reference to user 1
Dropping session holding a reference to user 1
Dropping session holding a reference to user 2
t = 3, pending: 0
user id 1 has strong count: 2
user id 1 has strong count: 2
user id 2 has strong count: 1
Dropping user Bob
Send users to verification queue: [1, 3]
user id 1 has strong count: 1
Dropping user Alice
user id 3 has strong count: 1
Dropping user Charlie
t = 4, calling `get` and `run_pending_tasks`
t = 4, pending: 0
t = 5, calling `get` and `run_pending_tasks`
t = 5, pending: 0
Exit program.

aarch64-apple-darwin

User Bob has friends Alice, Charlie,
Found session 6751FC53 from user_id: 1
Found session 55679ED1 from user_id: 2
Found session 4C7CAF90 from user_id: 1
Waiting
t = 1, calling `get` and `run_pending_tasks`
t = 1, pending: 3
t = 2, calling `get` and `run_pending_tasks`
t = 2, pending: 3
t = 3, calling `get` and `run_pending_tasks`
t = 3, pending: 3
t = 4, calling `get` and `run_pending_tasks`
Evicted session with key 6751FC53 of user_id 1 because Expired
Evicted session with key 4C7CAF90 of user_id 1 because Expired
Evicted session with key 55679ED1 of user_id 2 because Expired
Dropping session holding a reference to user 1
Dropping session holding a reference to user 1
Dropping session holding a reference to user 2
t = 4, pending: 0
user id 1 has strong count: 2
user id 1 has strong count: 2
user id 2 has strong count: 1
Dropping user Bob
Send users to verification queue: [1, 3]
user id 1 has strong count: 1
Dropping user Alice
user id 3 has strong count: 1
Dropping user Charlie
t = 5, calling `get` and `run_pending_tasks`
t = 5, pending: 0
Exit program.

I will report the issue to the quanta crate and hope it will be addressed soon. In the meantime, though, I am not sure what to do in your example to avoid the panic.

One option could be to use a slightly shorter ttl value, e.g. 2.9 seconds instead of 3.0:

--- a/examples/cascading_drop_async.rs
+++ b/examples/cascading_drop_async.rs
@@ -116,10 +116,10 @@ async fn main() {
     });
 
     // Make an artificially small cache and 1-second ttl to observe pruning of the tree.
-    let ttl = 3;
+    let ttl = 2.9f64;
     let sessions_cache = Cache::builder()
         .max_capacity(10)
-        .time_to_live(Duration::from_secs(ttl))
+        .time_to_live(Duration::from_millis((ttl * 1000.0) as u64))
         .eviction_listener(|key, value: Arc<Mutex<Session>>, cause| {
             println!(
                 "Evicted session with key {:08X} of user_id {:?} because {:?}",
@@ -178,7 +178,7 @@ async fn main() {
     }
 
     println!("Waiting");
-    for t in 1..=ttl + 1 {
+    for t in 1..=(ttl.round() as u32 + 1) {
         sleep(Duration::from_secs(1));
         sessions_cache.get(&0).await;
         sessions_cache.run_pending_tasks().await;

Another option could be to call run_pending_tasks() more than once per second.

And finally, we could leave the example as is and let it panic on arm64 platforms. Maybe we can add some source code comments to explain that it may panic on arm64 due to the issue in quanta::Instant::elapsed().

What do you think?

@peter-scholtens
Contributor Author

This is indeed an edge case, like the difference between >= and >. In my opinion, the example should not rely on implementation peculiarities like that. I would suggest renaming the variable let ttl = 3; and expressing the value in milliseconds, e.g. let ttl_ms = 2500;. This avoids all the conversions between usize and u32/u64 and keeps the code readable. You can still point at the issue with a short comment like: Caution: setting ttl to exact integer multiples of the time steps may cause different behavior than you expect, due to rounding or race conditions.
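The suggestion boils down to keeping the ttl integral in milliseconds, which avoids both float-to-integer casts and an exact multiple of the 1-second polling step. A minimal sketch of the conversion (variable names as suggested above):

```rust
use std::time::Duration;

fn main() {
    // ttl expressed in milliseconds: integral, readable, and not an
    // exact multiple of the 1-second time step used by the wait loop.
    let ttl_ms: u64 = 2_500;
    let ttl = Duration::from_millis(ttl_ms);

    // 2500 ms is exactly 2.5 s; no lossy float casts are needed.
    assert_eq!(ttl, Duration::from_secs_f64(2.5));
}
```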

@tatsuya6502
Member

Thanks for the opinion! So, how about this?

--- a/examples/cascading_drop_async.rs
+++ b/examples/cascading_drop_async.rs
@@ -115,11 +115,17 @@ async fn main() {
         }
     });
 
 
-    // Make an artificially small cache and 1-second ttl to observe pruning of the tree.
-    let ttl = 3;
+    // Later, we will check the entry count of the session_cache with this time_step
+    // interval.
+    let time_step = 1; // second
+
+    // Make an artificially small cache and 2.5-second ttl to observe pruning of the tree.
+    // Caution: setting ttl to exact integer multiples of the time steps may cause
+    // different behavior than you expect, due to rounding or race conditions.
+    let ttl_ms = 2500;
     let sessions_cache = Cache::builder()
         .max_capacity(10)
-        .time_to_live(Duration::from_secs(ttl))
+        .time_to_live(Duration::from_millis(ttl_ms))
         .eviction_listener(|key, value: Arc<Mutex<Session>>, cause| {
             println!(
                 "Evicted session with key {:08X} of user_id {:?} because {:?}",
@@ -178,8 +184,8 @@ async fn main() {
     }
 
     println!("Waiting");
-    for t in 1..=ttl + 1 {
-        sleep(Duration::from_secs(1));
+    for t in 1..=4 {
+        sleep(Duration::from_secs(time_step));
         sessions_cache.get(&0).await;
         sessions_cache.run_pending_tasks().await;
         println!("t = {}, pending: {}", t, sessions_cache.entry_count());

If this is okay, I will apply it after merging this PR.

@tatsuya6502 tatsuya6502 added this to the v0.12.2 milestone Dec 28, 2023

@tatsuya6502 tatsuya6502 left a comment


They look good to me. Thanks!

@tatsuya6502 tatsuya6502 merged commit e63c842 into moka-rs:main Dec 28, 2023
16 checks passed