Implement a store that uses Moka #6
Conversation
This is fantastic! Thank you. 🙏

I wonder if there's a way of generalizing this pattern? For instance, I could see using Redis for a similar purpose, à la the caching database backend Django provides.
Codecov Report

```
@@           Coverage Diff            @@
##             main       #6    +/-   ##
========================================
- Coverage   66.30%   58.09%   -8.21%
========================================
  Files           6        7       +1
  Lines         184      210      +26
========================================
  Hits          122      122
- Misses         62       88      +26
```
Sorry about the failing doc test, I should have checked it locally, that was lazy of me. I don't know how you'd generalize. But I was wondering whether implementing it as a SessionStore was the most appropriate approach, or whether it'd be better to make it part of the service somehow. In the end I did it this way because I was unfamiliar with the code base and this way didn't require touching any internals.
One idea I had was a kind of "store of stores" pattern. I think that would require changing SessionStore, though. For instance, perhaps there's an optional associated type, itself a SessionStore, which implementors can opt to use in their implementations. (I'm actually not sure if this kind of recursion is allowed by the type system, since it might be self-referential.) But that would allow any SessionStore to be used as a cache for another SessionStore.
It seems it didn't run the integration tests for the new store. I'm not super familiar with GitHub checks. What did I miss to get that to work?
Oh, I see it. It would require refactoring your workflow file, since it assumes the name of the test file and the feature to enable are the same. But I added two test files, one testing the MokaStore as a memory-only backend, and one testing it wrapping SQLite, and they can't both have the same name. The second one also needs both the sqlite and moka features, but since you use the same variable for both the test name and the feature to enable, that won't work. How do you want to handle this?
Sorry about that--it may not be the best assumption that the test file names will always match in this way. Perhaps we could treat it as a glob or prefix instead? |
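One possible shape for decoupling the two (a hedged sketch; the job and step names here are assumptions, not the repo's actual workflow) is to give each matrix entry its own explicit features list, so the test file name and the features no longer have to match:

```yaml
# Hypothetical workflow matrix: each entry names its test file and the
# features it needs independently.
strategy:
  matrix:
    include:
      - test: moka-store
        features: moka-store
        docker: false
      - test: moka-store-sqlite
        features: moka-store,sqlite-store
        docker: false

steps:
  - run: cargo test --test ${{ matrix.test }} --features ${{ matrix.features }}
```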
        features: moka-store
        docker: false
      - test: moka-store-sqlite
        features: moka-store,sqlite-store
This is much better than my previous implementation!
It looks like this is passing now (I'm not too worried about codecov, since these integration tests aren't currently tracking coverage). If you agree this looks good, I'm happy to merge it. I'm still mulling over a more generalized pattern which would allow any SessionStore to be used as a cache for another.
Sure. There are things you could improve, but it works as a basic implementation, I think. I haven't done extensive testing, but when running the counting example against a remote Postgres database over the internet, the page load time drops from ~60ms to ~30ms with the cache, so it seems useful. It would probably be possible to get close to 0ms by having the save to the remote db happen in the background and returning early. But you'd give up being able to report database errors back to the caller. Still, I wonder if having that as an option would be desirable. It's not as crucial, though, since in a real-life scenario reads are probably much more common than writes.
50% improvement is
I'm certainly not opposed to this.
As for generalizing, I just got an idea. It wouldn't take long to implement, so I'll be back with a WIP PR shortly.
Nevermind, it did not pan out.
I think I might have something that seems to work: it's a new struct that holds two SessionStores (one the cache, one the backing store). It implements SessionStore itself, so it can be used directly as the store with the session manager. I'll share the diff here in a bit.
Here's a diff of a sketch I think could work:

```diff
diff --git a/src/session_store.rs b/src/session_store.rs
index 45cfa87..ce2e7ec 100644
--- a/src/session_store.rs
+++ b/src/session_store.rs
@@ -1,4 +1,5 @@
 //! An arbitrary store which houses the session data.
+
 use async_trait::async_trait;
 use crate::session::{Session, SessionId, SessionRecord};
@@ -18,3 +19,77 @@ pub trait SessionStore: Clone + Send + Sync + 'static {
     /// A method for deleting a session from a store.
     async fn delete(&self, session_id: &SessionId) -> Result<(), Self::Error>;
 }
+
+#[derive(Debug, Clone)]
+struct CachingSessionStore<C: SessionStore, S: SessionStore> {
+    cache: C,
+    store: S,
+}
+
+#[derive(thiserror::Error, Debug)]
+enum CachingStoreError<Cache: SessionStore, Store: SessionStore> {
+    #[error(transparent)]
+    Cache(Cache::Error),
+
+    #[error(transparent)]
+    Store(Store::Error),
+}
+
+#[async_trait]
+impl<Cache, Store> SessionStore for CachingSessionStore<Cache, Store>
+where
+    Cache: SessionStore + std::fmt::Debug, // TODO: Why is this required to be Debug?
+    Store: SessionStore + std::fmt::Debug,
+{
+    type Error = CachingStoreError<Cache, Store>;
+
+    async fn save(&self, session_record: &SessionRecord) -> Result<(), Self::Error> {
+        self.store
+            .save(session_record)
+            .await
+            .map_err(Self::Error::Store)?;
+        self.cache
+            .save(session_record)
+            .await
+            .map_err(Self::Error::Cache)?;
+        Ok(())
+    }
+
+    async fn load(&self, session_id: &SessionId) -> Result<Option<Session>, Self::Error> {
+        let session = match self.cache.load(session_id).await {
+            Ok(Some(session)) => Some(session),
+
+            Ok(None) => {
+                let session = self
+                    .store
+                    .load(session_id)
+                    .await
+                    .map_err(Self::Error::Store)?;
+                if let Some(session) = session.clone() {
+                    let session_record = (&session).into();
+                    self.cache
+                        .save(&session_record)
+                        .await
+                        .map_err(Self::Error::Cache)?;
+                }
+                session
+            }
+
+            Err(err) => return Err(Self::Error::Cache(err)),
+        };
+
+        Ok(session)
+    }
+
+    async fn delete(&self, session_id: &SessionId) -> Result<(), Self::Error> {
+        self.store
+            .delete(session_id)
+            .await
+            .map_err(Self::Error::Store)?;
+        self.cache
+            .delete(session_id)
+            .await
+            .map_err(Self::Error::Cache)?;
+        Ok(())
+    }
+}
```
Looks reasonable to me. And if you wanted more than two for some reason, you could chain them together cons-list style.
This implements a SessionStore that uses Moka, a high-quality in-process concurrent memory cache. The store takes another SessionStore as an argument when created, which it acts as a wrapper for. It acts as a write-through cache: when loading a session, it reads from the cache and only reaches out to the wrapped SessionStore on a cache miss. When saving, it puts the value into the cache and also pushes it out to the wrapped SessionStore. Essentially, it saves a database roundtrip when you load a session, since typically the value will be in the cache. It also intentionally caches negative lookups, to avoid hitting the database if the same non-existent session ID is checked multiple times.
I also made it possible to use it as a pure memory store, since it only took a couple of extra lines of code to do so.