[API Proposal]: Migrate HybridCache from aspnet to runtime
#100290
Comments
@neon-sunset painful how? any as the person who is going to be implementing both sides of this API from the MSFT perspective: I don't see the concern (and I've actively prototyped it, and to emphasize: this is the pattern that we already proved with the output-cache backend) |
Any chance to get rid of the String for your new Interface for the key? ReadOnlySpan still does not work in async methods, right? Many implementations probably convert the string to byte arrays anyway. |
As for backends: I wouldn't presume. A database backend might want
|
Ah, hmpf, GitHub stole the <char> from my ReadOnlySequence, |
Perhaps a |
Well, in theory I guess we could use ROM-char, and if the input is |
My elevator pitch for |
That is a concern for the backend; the caller doesn't have any insight or interest in the "how"; if |
@Tornhoof ah, your comment makes a lot more sense now; yes, |
Worst case in the future this can be retrofitted through DIM that would perform conversion, should the interface be extended. offtopic: |
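For context on the DIM suggestion above, here is a hypothetical sketch (the interface name and the ReadOnlyMemory<char> overload are invented purely for illustration; they are not part of this proposal) of how a conversion-based default interface method could bridge a new key representation onto an existing string-based member:
```csharp
using System;
using System.Buffers;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical example only: shows the "retrofit via DIM" mechanism.
public interface IExampleBufferCache
{
    // Existing string-keyed member (mirrors the proposed TryGetAsync shape).
    ValueTask<bool> TryGetAsync(string key, IBufferWriter<byte> destination, CancellationToken token = default);

    // Later extension: a char-memory key with a default implementation that
    // converts and forwards, so existing implementations keep working unchanged.
    ValueTask<bool> TryGetAsync(ReadOnlyMemory<char> key, IBufferWriter<byte> destination, CancellationToken token = default)
        => TryGetAsync(key.ToString(), destination, token);
}
```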
Should For that matter, if the previous answer is no, is there value in |
Happy to refactor as part of throwing this over the fence from aspnet, but I wonder if @stephentoub wants to opine on the choices between T and VT here. |
It's my understanding (which may be wrong) that That said, this is an interface, which means |
In the more generalized case (anything other than bool/void): we would also need to weigh the alloc overhead in the sync case, but yes: in this specific case, we can optimize that away easily enough via reused instances. I'm happy to use Task / Task<bool>. It isn't controversial enough for me to get excited about; happy to agree if it unblocks things :) |
The downside of |
For the APIs in question, would it be at all common to write code that wouldn't immediately await the returned task? If every consumer will immediately await, there's relatively little downside to using ValueTask, with the upside that it could in rare-ish cases be made amortized allocation-free. The main concern with returning ValueTask here is possible misuse, because if someone tries to use it like they might a Task (e.g. storing it into a field and awaiting it multiple times, putting it into a dictionary, wanting to use it as part of a WhenAny, etc.), the results will range from an exception highlighting the misuse to spooky failure at a distance. |
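To make the distinction concrete, a small illustrative sketch (names invented; it uses the IBufferDistributedCache.TryGetAsync shape proposed below) of the intended single-await consumption versus the misuse pattern described above:
```csharp
using System.Buffers;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

internal static class ValueTaskConsumptionExample
{
    // Safe: the ValueTask<bool> is awaited exactly once, immediately.
    public static async Task<bool> ReadOnceAsync(
        IBufferDistributedCache cache, string key, IBufferWriter<byte> destination)
        => await cache.TryGetAsync(key, destination);

    // Misuse (shown commented out): unlike Task, a ValueTask must not be stored
    // and awaited multiple times, combined with Task.WhenAny, etc.
    //
    // ValueTask<bool> pending = cache.TryGetAsync(key, destination);
    // bool first  = await pending;
    // bool second = await pending; // may throw, or fail in surprising ways
}
```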
In the case of That said: this is a concurrent API, not a sequential API; as such, I do not expect IVTS implementations hence I do not think the alloc-free point is genuine at least in the true async case - nor do I expect many implementations (except perhaps Tsavorite==="MSFT Faster", which I do have a test branch for) to provide synchronous results (where VT also works well, and in the case of For |
```csharp
namespace Microsoft.Extensions.Caching.Distributed
{
public interface IBufferDistributedCache : IDistributedCache
{
bool TryGet(string key, IBufferWriter<byte> destination);
ValueTask<bool> TryGetAsync(string key, IBufferWriter<byte> destination, CancellationToken token = default);
void Set(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options);
ValueTask SetAsync(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options, CancellationToken token = default);
}
}
namespace Microsoft.Extensions.Caching.Hybrid
{
public partial interface IHybridCacheSerializer<T>
{
T Deserialize(ReadOnlySequence<byte> source);
void Serialize(T value, IBufferWriter<byte> target);
}
public interface IHybridCacheSerializerFactory
{
bool TryCreateSerializer<T>([NotNullWhen(true)] out IHybridCacheSerializer<T>? serializer);
}
public sealed class HybridCacheEntryOptions
{
public TimeSpan? Expiration { get; init; }
public TimeSpan? LocalCacheExpiration { get; init; }
public HybridCacheEntryFlags? Flags { get; init; }
}
[Flags]
public enum HybridCacheEntryFlags
{
None = 0,
DisableLocalCacheRead = 1 << 0,
DisableLocalCacheWrite = 1 << 1,
DisableLocalCache = DisableLocalCacheRead | DisableLocalCacheWrite,
DisableDistributedCacheRead = 1 << 2,
DisableDistributedCacheWrite = 1 << 3,
DisableDistributedCache = DisableDistributedCacheRead | DisableDistributedCacheWrite,
DisableUnderlyingData = 1 << 4,
DisableCompression = 1 << 5,
}
public abstract class HybridCache
{
[SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Delegate differences make this unambiguous")]
public abstract ValueTask<T> GetOrCreateAsync<TState, T>(string key, TState state, Func<TState, CancellationToken, ValueTask<T>> factory,
HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default);
[SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Delegate differences make this unambiguous")]
public ValueTask<T> GetOrCreateAsync<T>(string key, Func<CancellationToken, ValueTask<T>> factory,
HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default)
=> throw new System.NotImplementedException();
public abstract ValueTask SetAsync<T>(string key, T value, HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default);
[SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
public abstract ValueTask RemoveAsync(string key, CancellationToken cancellationToken = default);
[SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
public virtual ValueTask RemoveAsync(IEnumerable<string> keys, CancellationToken cancellationToken = default)
=> throw new System.NotImplementedException();
[SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
public virtual ValueTask RemoveByTagAsync(IEnumerable<string> tags, CancellationToken cancellationToken = default)
=> throw new System.NotImplementedException();
public abstract ValueTask RemoveByTagAsync(string tag, CancellationToken cancellationToken = default);
}
}
```
|
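As a concrete illustration of the serializer shape above, a minimal sketch (not part of the proposal; it assumes System.Text.Json and UTF-8 JSON payloads) of an IHybridCacheSerializer<T> implemented over Utf8JsonReader/Utf8JsonWriter:
```csharp
using System.Buffers;
using System.Text.Json;
using Microsoft.Extensions.Caching.Hybrid;

// Illustrative only: shows how the ReadOnlySequence<byte> / IBufferWriter<byte>
// shapes are intended to be consumed without intermediate byte[] allocations.
internal sealed class JsonHybridCacheSerializer<T> : IHybridCacheSerializer<T>
{
    public T Deserialize(ReadOnlySequence<byte> source)
    {
        var reader = new Utf8JsonReader(source);
        return JsonSerializer.Deserialize<T>(ref reader)!;
    }

    public void Serialize(T value, IBufferWriter<byte> target)
    {
        using var writer = new Utf8JsonWriter(target);
        JsonSerializer.Serialize(writer, value);
    }
}
```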
Not sure there's enough complexity to warrant that, but I'll take a look
The main proposed impl here is in another OOB package -
👍
applied in PR |
Video comments (in order):
Serialization: by default, both L1 and L2 would use serialization, because the expectation is that many users will be migrating from raw
Template method pattern: by contrast, I'm thinking of this type as an
Re factory constructor: if we do that, it would need to be over in
(just going through chronologically, my apologies for repetition) yes, the plan is for the implementation to remain in aspnetcore, but it is not part of the aspnetcore framework - it is OOB and available to multiple TFMs including netfx and ns2
Re: Yes, it is a correct statement that aspnetcore contains the default implementation. Yes, multiple separate components implement
Re the topic of things being in abstractions or runtime; ultimately the point of caching-abstractions is to simplify layering:
By comparison: think of Options: the idea is to reuse the policy instance; the DI/creation layer in aspnetcore has facilities to specify the default options to use when none is provided per-call
Typical usage:
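The snippet that followed "Typical usage:" did not survive extraction; as a stand-in, here is a minimal sketch of the GetOrCreateAsync pattern against the proposed API (SomeService, SomeInformation and the data-source method are invented placeholder names, and the reused options instance reflects the "reuse the policy instance" note above):
```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Hybrid;

public class SomeInformation { }

public class SomeService
{
    private readonly HybridCache _cache;

    // Reused policy instance, rather than allocating options per call.
    private static readonly HybridCacheEntryOptions s_options = new()
    {
        Expiration = TimeSpan.FromMinutes(5),
        LocalCacheExpiration = TimeSpan.FromMinutes(5),
    };

    public SomeService(HybridCache cache) => _cache = cache;

    public async Task<SomeInformation> GetSomeInfoAsync(string name, int id, CancellationToken token = default)
    {
        return await _cache.GetOrCreateAsync(
            $"someinfo:{name}:{id}",                              // cache key
            async ct => await LoadFromSourceAsync(name, id, ct),  // underlying data callback
            s_options,
            cancellationToken: token);
    }

    private Task<SomeInformation> LoadFromSourceAsync(string name, int id, CancellationToken token)
        => Task.FromResult(new SomeInformation()); // placeholder for the real data source
}
```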
|
Naming is hard etc: one very minor thing I noticed only now. |
This was discussed via email and the general sentiment was one of agreement with the current proposal. Considering that #103103 was also approved, I will go ahead and consider this as api-approved to include it in preview 7. Hopefully, this aligns with the official verdict. |
Based on offline conversation, we've decided to approve this as proposed. |
@eerhardt yes, if that's the correct approach here (I've closed things too soon before, i.e. at the wrong time for the expected shipping process) |
Yep - once the PR (or PRs) that add the proposed APIs have merged into main, we close the API proposal issue. This was implemented by #103103. Closing. |
Background and motivation
Context: this is part of Epic: IDistributedCache updates in .NET 9 and the Hybrid Cache API proposal
Was originally just IDistributedCache, but this issue is now updated to include all of the abstract HybridCache API (but not the aspnet implementation).

HybridCache is a new cache abstraction that sits on top of IDistributedCache (L2) and IMemoryCache (L1) to provide an integrated cache experience that includes serialization, stampede protection, and a range of other features. The API has been discussed extensively as part of aspnet, most significantly in the aforementioned dotnet/aspnetcore#54647. We would now like to kick the aspnet bits over the fence into Microsoft.Extensions.Caching.Abstractions, so that:
The specific proposed API changes are laid out in #103103
The existing IDistributedCache API is based around byte[], which is wildly inefficient for anything that isn't an in-memory lookup of string to byte[] (i.e. handing back the same array each time, which is itself a bit dangerous because of array mutation). To be 100% explicit:

- byte[] needs to be right-sized, it must be allocated per usage (especially if we want defensive copies)
- byte[] demands contiguous memory, which can force LOH etc

The proposal is to add non-allocating APIs, similar to those used for Output Cache in .NET 8, to avoid these allocations; this assists every other backend - Redis, SQL, SQLite, Cosmos, etc.
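To make the contrast concrete, here is a hypothetical in-memory backend sketch (ExampleBufferCache is invented for illustration; a real backend would implement the full IBufferDistributedCache plus IDistributedCache surface, including the async and removal members elided here). Reads push into caller-provided buffers rather than returning a fresh byte[], and writes must copy out of the caller-owned ReadOnlySequence<byte>:
```csharp
using System.Buffers;
using System.Collections.Concurrent;
using Microsoft.Extensions.Caching.Distributed;

// Illustrative only: demonstrates the proposed buffer-based read/write shapes.
internal sealed class ExampleBufferCache // a real backend: IBufferDistributedCache
{
    private readonly ConcurrentDictionary<string, byte[]> _store = new();

    public bool TryGet(string key, IBufferWriter<byte> destination)
    {
        if (!_store.TryGetValue(key, out var payload))
        {
            return false; // "not found" is distinct from a zero-length payload
        }
        destination.Write(payload); // push into caller-managed buffers; no new byte[] per call
        return true;
    }

    public void Set(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options)
    {
        // The contract requires copying: the sequence may be backed by pooled
        // memory that goes out of scope once this call returns.
        _store[key] = value.ToArray();
    }
}
```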
As an example of the impact of this, see this table (note also the second table in the same comment), where a mocked up version of the API was used to test a FASTER-based cache backend (this is a useful backend because it has very low internal overheads).
API Proposal
(edit: changed name from cancellationToken to token to mirror IDistributedCache, and added = default)
(edit: added the sync paths)
API Usage
The usage of this API is optional; existing backends that implement IDistributedCache may choose (or not) to additionally implement the new API. The new "hybrid cache" piece will type-test for the feature, and use it appropriately. Any backends that do not implement the API: continue to work, using the byte[] allocation.

The design for "set" is simple: the caller owns the memory lifetime via ReadOnlySequence<byte> - the backend (as an API contract) is explicitly meant to copy the data out; storing the passed in value is undefined behaviour as that data may go out of scope.

The design for "get" is for the caller to handle memory management (which would otherwise be duplicated and brittle in every backend); this is achieved by passing in an IBufferWriter<byte> to which the backend can push the data. This also means that the "hybrid cache" piece can handle quotas etc before data is fully read. The bool return-value is used to distinguish "found" vs "not found"; this is null vs not null on the old API, and is necessary because zero bytes is a valid payload length in some formats (I'm looking at you, protobuf).

Note that hybrid cache proposal only uses async fetch; however, it is noted that if we only added async methods, this would mean that IBufferDistributedCache is "unbalanced" vs IDistributedCache (sync vs async), and omitting it would limit our options later if we decide to add sync paths on hybrid cache; accordingly, sync get+set paths are included in this proposal.
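A minimal caller-side sketch of that type-test (the helper name and return shape are illustrative only, not part of the proposal):
```csharp
using System;
using System.Buffers;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

internal static class BufferCacheReadExample
{
    // Prefer the buffer-based feature when the backend offers it; otherwise
    // fall back to the existing byte[]-returning API.
    public static async ValueTask<ReadOnlyMemory<byte>?> GetPayloadAsync(
        IDistributedCache cache, string key, CancellationToken token = default)
    {
        if (cache is IBufferDistributedCache buffered)
        {
            var writer = new ArrayBufferWriter<byte>(); // caller-managed buffers
            if (await buffered.TryGetAsync(key, writer, token))
            {
                return writer.WrittenMemory; // "found": zero bytes is still a valid payload
            }
            return null; // "not found"
        }

        // Legacy path: allocates a right-sized byte[] per call.
        var blob = await cache.GetAsync(key, token);
        return blob is null ? (ReadOnlyMemory<byte>?)null : new ReadOnlyMemory<byte>(blob);
    }
}
```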
Alternative Designs
- Stream - allocatey, indirect, and multi-copylicious; significant complications for producer and consumer
- IMemoryOwner<byte> or similar return - contiguous, caller gets no chance to intercept until all prepared
- ReadOnlySpan<byte> input (to avoid storage) - contiguous, only applies to "sync"
Risks
None; any existing backends not implementing the feature continue to work as they do currently, hopefully adding support in time; there is no additional service registration for this auxiliary API