
Support Redis #68

Closed
omid opened this issue Jan 6, 2021 · 8 comments · Fixed by #72

Comments

@omid
Contributor

omid commented Jan 6, 2021

I suggest supporting any network-based cache software, like Redis, Memcached, and so on.

In a k8s environment, where you may have several small pods running at the same time, caching locally is not helpful enough and increases memory usage, which may end in an OOM kill. Having a shared place to cache the data would help.

If you agree, I can start working on it by adding a new store type, RedisCache, which I hope will be similar to TimedCache and UnboundCache.

To support this, I think the return type of the function must implement Serialize and Deserialize.
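To see why that bound is needed: a network-backed store has to push values over the wire, so the value type must round-trip through a serialized form. A minimal sketch of the idea, using std's `ToString`/`FromStr` as a stand-in for serde's `Serialize`/`Deserialize` (the `RedisCache` struct here is hypothetical, with a `HashMap` standing in for the Redis connection):

```rust
use std::collections::HashMap;
use std::str::FromStr;

// Hypothetical sketch: a network-backed store keeps values in their
// serialized (string) form, so V must round-trip through a wire format.
// ToString/FromStr stand in for serde's Serialize/Deserialize here.
struct RedisCache {
    // Stand-in for the actual Redis connection.
    conn: HashMap<String, String>,
}

impl RedisCache {
    fn new() -> Self {
        RedisCache { conn: HashMap::new() }
    }

    fn cache_set<V: ToString>(&mut self, k: &str, v: V) {
        self.conn.insert(k.to_string(), v.to_string());
    }

    // The value is reconstructed from its serialized form, so it is
    // returned by value, not by reference.
    fn cache_get<V: FromStr>(&mut self, k: &str) -> Option<V> {
        self.conn.get(k).and_then(|s| s.parse().ok())
    }
}
```

Note the `cache_get` here returns an owned `Option<V>`, which is exactly where this clashes with the existing trait, as discussed below.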

@omid
Contributor Author

omid commented Jan 7, 2021

Another problem is that the trait returns references, for example:

    /// Attempt to retrieve a cached value
    fn cache_get(&mut self, k: &K) -> Option<&V>;

So when I fetch a value from Redis, it's created inside the function, and then it's not possible to return a reference to it!

A solution could be to store them in a hashmap inside the struct and return a reference to that. But I don't know how to free that memory after the value goes out of scope.
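The buffering workaround described above can be sketched roughly like this (a hypothetical `BufferedStore`, with the network fetch faked by a `format!` call):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the workaround: values fetched from the remote
// store are buffered in a HashMap owned by the struct, so cache_get can
// hand out a reference whose lifetime is tied to &mut self.
struct BufferedStore {
    buffer: HashMap<String, String>,
}

impl BufferedStore {
    fn new() -> Self {
        BufferedStore { buffer: HashMap::new() }
    }

    fn cache_get(&mut self, k: &str) -> Option<&String> {
        // Pretend this value came over the network; without the buffer,
        // it would be a local and there would be nothing to borrow from.
        let fetched = format!("value-for-{}", k);
        Some(self.buffer.entry(k.to_string()).or_insert(fetched))
    }
}
```

The drawback is the one raised above: the buffer grows with every fetched key, and there is no obvious point at which entries can be evicted.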

@jaemk
Owner

jaemk commented Jan 8, 2021

Yeah, the reference thing is a problem. The last time I was thinking about this, I was considering a separate trait where the cache_* methods would return a wrapped V, like -> CacheRef<V>, which impls Deref -> V. A big downside is that the cached macros would need to handle both traits. I guess we could just change the existing Cached trait to return a new wrapper type - the Deref impls would make it mostly compatible, except in types...
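The wrapper idea above can be sketched as an enum that holds either a borrow (for in-memory stores) or an owned value (for network stores) and derefs to `&V` either way. This is only a guess at the shape being described, not the crate's actual design:

```rust
use std::ops::Deref;

// Sketch of the CacheRef wrapper mentioned above: it can wrap a borrow
// (in-memory stores) or an owned value (network stores), and both
// variants deref to &V, so call sites look the same.
enum CacheRef<'a, V> {
    Borrowed(&'a V),
    Owned(V),
}

impl<'a, V> Deref for CacheRef<'a, V> {
    type Target = V;

    fn deref(&self) -> &V {
        match self {
            CacheRef::Borrowed(v) => v,
            CacheRef::Owned(v) => v,
        }
    }
}
```

With this shape, `cache_get` could return `Option<CacheRef<V>>`: an in-memory store wraps its existing reference, while a Redis store wraps the freshly deserialized value.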

@omid
Contributor Author

omid commented Jan 8, 2021

@jaemk I've spent some time on this but couldn't figure it out :(
Can you lead me to the container (smart pointer) you meant?

@seanchen1991

Would it make sense to add Redis support behind a feature flag?

So a user would add something like this in their Cargo.toml when they want to include the crate:

    [dependencies]
    cached = { version = "*", features = ["redis"] }

I agree that it’s a good thing to include, but it might be a bit much to include it by default.

@omid
Contributor Author

omid commented Jun 1, 2021

@seanchen1991 Sure it will be behind a feature flag.
There is an unfinished PR where you can see the implementation. (link to the feature flag)

@PeterGrace

I'd love to use the cached crate in a project, but I'm waiting for this issue to be solved since I'm deploying pods to Kubernetes and need a shared store. Reading this conversation, it seems to have stalled out around June. There was a mention of #98, and I'm wondering if that discussion is a prerequisite for the implementation of this PR? I'm not the greatest Rust programmer, but I'd like to help support this effort somehow.

@omid
Contributor Author

omid commented Jan 27, 2022

Hey @PeterGrace, it's kinda a prerequisite for this one.

So far, cached only supports sync storages, because all the existing ones are in-memory stores and the crate was not designed with remote storage in mind.
I think a sync Redis implementation is easy, and we can add it if you need it.
But proper async Redis support is not possible right now.

@omid
Contributor Author

omid commented Jan 27, 2022

@PeterGrace I just pushed the sync redis support to the PR, #72.

@jaemk jaemk closed this as completed in #72 Feb 25, 2022