RedisHashCommands, along with its reactive/async/cluster counterparts, should allow different types for keys and hash fields #2801
Comments
That isn't easily achievable, as the entire architecture is based on key and value types. If you're looking for different data types, then you'd either serialize the fields yourself on top of a byte-oriented codec, or dispatch commands with their own codec (see the example further down).
Yes, this is a work-around we ended up with eventually. However, it looks like quite an awkward solution: building an encoder/decoder layer on top of another layer that already has its own encoder and decoder.
Could you elaborate a bit more on this approach, please? I'm not sure I grasped what you mean. Thank you!
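For readers following along, such a wrapper layer might look roughly like this (a hypothetical sketch, not code from this thread; the AppCodec interface and the DomainHashOps name are illustrative):

import java.util.Map;
import java.util.stream.Collectors;

import io.lettuce.core.api.sync.RedisHashCommands;

// Hypothetical app-level codec: converts a domain type to/from the driver's String type.
interface AppCodec<T> {
    String encode(T value);
    T decode(String raw);
}

// The driver only sees String keys, fields and values; the application (de)serializes
// its own field and value types around every call.
class DomainHashOps<F, V> {

    private final RedisHashCommands<String, String> hash;
    private final AppCodec<F> fieldCodec;
    private final AppCodec<V> valueCodec;

    DomainHashOps(RedisHashCommands<String, String> hash, AppCodec<F> fieldCodec, AppCodec<V> valueCodec) {
        this.hash = hash;
        this.fieldCodec = fieldCodec;
        this.valueCodec = valueCodec;
    }

    Boolean hset(String key, F field, V value) {
        return hash.hset(key, fieldCodec.encode(field), valueCodec.encode(value));
    }

    V hget(String key, F field) {
        String raw = hash.hget(key, fieldCodec.encode(field));
        return raw == null ? null : valueCodec.decode(raw);
    }

    Map<F, V> hgetall(String key) {
        return hash.hgetall(key).entrySet().stream()
                .collect(Collectors.toMap(
                        e -> fieldCodec.decode(e.getKey()),
                        e -> valueCodec.decode(e.getValue())));
    }
}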
Actually, this is a good approach, also performance-wise. Having serialization as part of the application threads distributes the load, and the event loop is less busy, resulting in a much better performance profile.
// Dispatch a single command with its own codec, independent of the connection's codec:
AsyncCommand<String, String, String> myCommand = new AsyncCommand<>(new Command<>(CommandType.SET,
        new StatusOutput<>(StringCodec.ASCII),
        new CommandArgs<>(StringCodec.ASCII).addKey("foo").addValue("bar")));

connection.dispatch((RedisCommand) myCommand);
myCommand.join();
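Applied to the hash case from this issue, the same pattern might look like this (a sketch under the same assumptions as the snippet above: the connection already exists, and serializeField/serializeValue stand in for application-level serialization):

// Same dispatch technique, but for HSET: the key goes through the codec (addKey),
// while the field and value are passed as pre-serialized byte arrays (add).
byte[] fieldBytes = serializeField(myField);   // hypothetical application-side serialization
byte[] valueBytes = serializeValue(myValue);

AsyncCommand<String, String, Long> hsetCommand = new AsyncCommand<>(new Command<>(CommandType.HSET,
        new IntegerOutput<>(StringCodec.UTF8),
        new CommandArgs<>(StringCodec.UTF8)
                .addKey("myhash")
                .add(fieldBytes)
                .add(valueBytes)));

connection.dispatch((RedisCommand) hsetCommand);
hsetCommand.join();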
Hmm... As far as I can understand the API, a new connection has to be created for every new codec (lettuce/src/main/java/io/lettuce/core/RedisClient.java, lines 215 to 220 in 764fdf3). So multiple codecs would mean multiple connections?
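To make that concrete, in the current API the codec is chosen when the connection is created (a small illustration, assuming a local Redis instance):

// Each codec is bound to its own connection:
RedisClient client = RedisClient.create("redis://localhost");

// connect() uses StringCodec.UTF8; a different codec means another connect(...) call,
// i.e. another connection.
StatefulRedisConnection<String, String> stringConnection = client.connect();
StatefulRedisConnection<byte[], byte[]> binaryConnection = client.connect(ByteArrayCodec.INSTANCE);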
Codecs are used internally to decode push and pub-sub messages, therefore there is an internal association between codec and connection in addition to sending commands. If you just want to send a command, then dispatching it with its own codec (as in the example above) doesn't require a separate connection.
@rgohol could we take a step back? What is the direct problem we are trying to address: is it API usability of the driver, is it performance, or is it something else? As far as I understand, you already have a working solution using your own decoder, and you have no problems with the performance of that solution. So is it only that you believe it should not be a part of your application, but rather a part of the driver? I think answering this question would help us all understand the situation better.
@tishun, thank you for looking into it. Our concern is the following: we have a working system that makes quite heavy use of Redis. Initially it mostly used regular "string"-type data (and some other types, but no hashes), and we also implemented a custom serialization scheme for the keys. The scheme is based on a domain-specific type that lets us simplify managing Redis keys in the domain-layer code.

Now, as the system keeps growing, we realized at some point that it could benefit significantly from employing the "hash" type. To make it really performant, we would like hash keys (aka fields) and values serialized directly between source byte arrays and domain data models, back and forth (i.e. avoiding any intermediate transformations). The domain data models can be different for different hash fields and values. Moreover, they are always different from the domain-type representations of Redis base keys.

Apparently there's no problem achieving that for hash values. But suddenly it does not work out for the base keys and the hash fields married together. Instead, we have to downgrade our key types to byte buffers, then make the hash fields byte buffers as well, and then implement yet another layer of serialization on top of the Lettuce commands (which pretend to have one already). Therefore, the built-in serialization for keys and fields in Lettuce turns out to be useless for us.

The major problem here is that we have to modify an existing code base that has been working for years just in order to adopt one more Redis type. And the other problem: yet another layer appears just because of a design-enforced restriction in Lettuce. I still believe it is quite weird that Lettuce artificially restricts base keys and hash fields to the same type. There are no such constraints in the Redis docs per se, are there?

So basically, the bottom line is: yes, we can handle it, but should we? Sorry for the long read, I appreciate your patience.
For additional context, there's Spring Data Redis, which builds on top of Lettuce and makes a distinction not only between keys and values, but also between hash fields and hash values. Effectively, it is a facade that accepts your domain types, applies (de-)serialization, and sends the command to the driver. Using different serialization formats isn't uncommon; however, the concrete implementation differs across applications. If a driver or something on top serves your purpose, then great. In many cases, however, the requirements are tied too closely to a particular application, so it doesn't make sense to extend a library with your specific requirements; instead, hosting that kind of custom code in your own code base is a good way to go. So it always depends.
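For illustration, a minimal Spring Data Redis sketch of that distinction (the UserProfile type, the local connection factory and Jackson on the classpath are assumptions, not part of this thread):

import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

// Hypothetical domain type stored as a hash value.
record UserProfile(String name, int age) {}

public class HashSerializerSketch {

    public static void main(String[] args) {
        LettuceConnectionFactory factory = new LettuceConnectionFactory(); // localhost:6379
        factory.afterPropertiesSet(); // newer Spring Data Redis versions may also need factory.start()

        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Keys, hash fields and hash values each get their own serializer.
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.afterPropertiesSet();

        // HashOperations<K, HK, HV>: hash fields (HK) and hash values (HV) are typed independently.
        HashOperations<String, String, UserProfile> hashOps = template.opsForHash();
        hashOps.put("user:42", "profile", new UserProfile("Alice", 30));
        UserProfile profile = hashOps.get("user:42", "profile");
        System.out.println(profile);
    }
}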
Hey @rgohol, thanks for explaining this in detail. We actually need that in order to figure out how best we can help. Don't hesitate to add more information if we are missing something. And thanks @mp911de for the discussion on the topic and the ideas provided. Assuming I understand correctly, the major problem is that you have to modify an existing code base that has been working for years just in order to adopt one more Redis type.
I can see where the predicament stands. From the standpoint of the Lettuce driver, it would be close to impossible to support an arbitrary number of generics for each command (there are commands with even more parameters, and one could always argue those need to be able to have different types too). Such a change would also be a nightmare from a compatibility perspective for all existing users of the driver. So the proposed solution unfortunately goes against the core design decisions and, as such, is very unlikely to be implemented.

That does not mean the use case is an invalid one, but we will have to spend more time thinking about how to solve it in a good way. One thing that @mp911de proposed during our discussion is to consider using dynamic Redis command interfaces, if you haven't already. Have you thought about that? As usual, PRs on this topic are welcome, as long as they do not break existing use cases.
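For anyone landing here, a rough sketch of what that could look like with a dynamic command interface (illustrative only; the interface name is made up, and the exact set of supported parameter types is described in the Lettuce "Redis Command Interfaces" documentation, so treat the byte[] arguments as an assumption to verify):

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.RedisCommandFactory;
import io.lettuce.core.dynamic.annotation.Command;

// Hypothetical command interface: String base keys, application-serialized byte[] fields/values.
interface BinaryFieldHashCommands extends Commands {

    @Command("HSET")
    Long hset(String key, byte[] field, byte[] value);

    @Command("HGET")
    byte[] hget(String key, byte[] field);
}

public class DynamicHashCommandsSketch {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost"); // assumed local Redis
        StatefulRedisConnection<String, String> connection = client.connect();

        RedisCommandFactory factory = new RedisCommandFactory(connection);
        BinaryFieldHashCommands hash = factory.getCommands(BinaryFieldHashCommands.class);

        hash.hset("user:42", "name".getBytes(), "Alice".getBytes());
        byte[] name = hash.hget("user:42", "name".getBytes());
        System.out.println(new String(name));

        connection.close();
        client.shutdown();
    }
}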
Yes, we have already. Which, by the way, reminded me of a good old aphorism:
However, regardless of all the details of our particular system and the ways we can work around the issue, I would like to emphasize the primary point of the issue: the Redis API itself does not imply any intersection or other correlation between the base-key and hash-field namespaces. That leads to a rather simple conclusion: if a general-purpose client (like Lettuce) generalizes Redis keys and values as abstract types (e.g. K and V), then hash fields should not be forced into the same type as the base keys. I mean, it is a very common-sense assumption, not even specific to Redis: if two sets are independent, there is no reason to require their elements to share a representation. That said, I totally get the point that the current design of the driver does not make it easy to embrace more than two types.
Yeah, I would probably give it a shot. Not sure, though, whether I can set aside enough time to start working on that right away. Thank you for your time!
Feature Request
It would be great to allow different types for keys and fields for hashes.
For example, RedisHashCommands could look like the sketch below. Naturally, the keys and the fields do not have to have the same type.
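(The original snippet did not survive this copy of the issue; the following is an illustrative sketch, with F as a hypothetical field type parameter.)

import java.util.List;
import java.util.Map;

// Illustrative only: RedisHashCommands with a dedicated generic type F for hash fields.
public interface RedisHashCommands<K, F, V> {

    Boolean hset(K key, F field, V value);

    V hget(K key, F field);

    Long hdel(K key, F... fields);

    Map<F, V> hgetall(K key);

    List<F> hkeys(K key);
}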
Is your feature request related to a problem? Please describe.
We use plain strings for keys and serialized binary data for fields in some of our hash entities.
With the current API we have to stick to a work-around: declare both keys and fields as strings and then additionally serialize/deserialize the field types in each call to the methods of RedisHashCommands, which is not really efficient.
Describe the solution you'd like
A dedicated generic type for fields in RedisHashCommands and its counterparts.
Describe alternatives you've considered
Only the work-around described above.
Teachability, Documentation, Adoption, Migration Strategy
Actually, there could be a separate set of interfaces, like:
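Something along these lines, perhaps (again an illustrative sketch; the interface name RedisHashFieldCommands is made up):

import java.util.Map;

// A parallel interface with an explicit field type F, living alongside the existing
// RedisHashCommands<K, V> so that current clients keep compiling unchanged.
public interface RedisHashFieldCommands<K, F, V> {

    Boolean hset(K key, F field, V value);

    V hget(K key, F field);

    Map<F, V> hgetall(K key);
}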
That could help with migrations of existing clients.
See also an unanswered discussion: #2094