
[API Proposal]: Migrate HybridCache from aspnet to runtime #100290

Closed
mgravell opened this issue Mar 26, 2024 · 27 comments
Labels
api-approved API was approved in API review, it can be implemented area-Extensions-Caching blocking Marks issues that we want to fast track in order to unblock other important work
Milestone

Comments

@mgravell
Member

mgravell commented Mar 26, 2024

Background and motivation

Context: this is part of Epic: IDistributedCache updates in .NET 9 and the Hybrid Cache API proposal.

This was originally just about IDistributedCache, but the issue has now been updated to include all of the abstract HybridCache API (but not the aspnet implementation).

HybridCache is a new cache abstraction that sits on top of IDistributedCache (L2) and IMemoryCache (L1) to provide an integrated cache experience that includes serialization, stampede protection, and a range of other features. The API has been discussed extensively as part of aspnet, most significantly in the aforementioned dotnet/aspnetcore#54647.

We would now like to kick the aspnet bits over the fence into Microsoft.Extensions.Caching.Abstractions, so that:

  • backend implementations such as Redis, SQL, etc do not need an aspnet framework reference
  • applications other than aspnet can consume the APIs
  • other non-aspnet implementations are possible (in particular, FusionCache has expressed interest)

The specific proposed API changes are laid out in #103103


The existing IDistributedCache API is based around byte[], which is wildly inefficient for anything that isn't an in-memory lookup of string to byte[] (i.e. handing back the same array each time, which is itself a bit dangerous because of array mutation).

To be 100% explicit:

  • because the byte[] needs to be right-sized, it must be allocated per usage (especially if we want defensive copies)
  • even if we knew the length, to use an array-segment efficiently we would also need to agree on a recycling strategy between caller and callee
  • byte[] demands contiguous memory, which can force LOH etc

The proposal is to add non-allocating APIs, similar to those used for Output Cache in .NET 8, to avoid these allocations; this assists every other backend - Redis, SQL, SQLite, Cosmos, etc.

As an example of the impact, see this table (note also the second table in the same comment), where a mocked-up version of the API was used to test a FASTER-based cache backend (a useful backend for benchmarking because it has very low internal overheads).

| Method         | KeyLength | PayloadLength | Mean        | Error       | StdDev      | Gen0   | Gen1   | Allocated |
|--------------- |---------- |-------------- |------------:|------------:|------------:|-------:|-------:|----------:|
| Get            | 128       | 10240         |    576.0 ns |     9.79 ns |     5.83 ns | 0.6123 |      - |   10264 B |
| Set            | 128       | 10240         |    882.0 ns |    23.99 ns |    22.44 ns | 0.6123 |      - |   10264 B |
| GetAsync       | 128       | 10240         |    657.6 ns |    16.96 ns |    14.16 ns | 0.6189 |      - |   10360 B |
| SetAsync       | 128       | 10240         |  1,094.7 ns |    55.15 ns |    51.58 ns | 0.6123 |      - |   10264 B |
|                |           |               |             |             |             |        |        |           |
| GetBuffer      | 128       | 10240         |    366.1 ns |     6.22 ns |     5.20 ns |      - |      - |         - |
| SetBuffer      | 128       | 10240         |    495.4 ns |     7.11 ns |     2.54 ns |      - |      - |         - |
| GetAsyncBuffer | 128       | 10240         |    387.9 ns |     7.60 ns |     1.97 ns | 0.0014 |      - |      24 B |
| SetAsyncBuffer | 128       | 10240         |    649.9 ns |    12.70 ns |    11.88 ns |      - |      - |         - |

API Proposal

// add extension API for existing IDistributedCache, to avoid byte[] overheads
namespace Microsoft.Extensions.Caching.Distributed;

public interface IBufferDistributedCache : IDistributedCache
{
    bool TryGet(string key, IBufferWriter<byte> destination);
    ValueTask<bool> TryGetAsync(string key, IBufferWriter<byte> destination, CancellationToken token = default);
    void Set(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options);
    ValueTask SetAsync(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options, CancellationToken token = default);
}

// define abstract API for new HybridCache system
namespace Microsoft.Extensions.Caching.Hybrid;

public abstract class HybridCache
{
    public abstract ValueTask<T> GetOrCreateAsync<TState, T>(string key, TState state, Func<TState, CancellationToken, ValueTask<T>> factory, 
        HybridCacheEntryOptions? options = null, IEnumerable<string>? tags = null, CancellationToken cancellationToken = default);
    public ValueTask<T> GetOrCreateAsync<T>(string key, Func<CancellationToken, ValueTask<T>> factory, 
        HybridCacheEntryOptions? options = null, IEnumerable<string>? tags = null, CancellationToken cancellationToken = default);
    public abstract ValueTask SetAsync<T>(string key, T value, HybridCacheEntryOptions? options = null, IEnumerable<string>? tags = null, CancellationToken cancellationToken = default);
    public abstract ValueTask RemoveAsync(string key, CancellationToken cancellationToken = default);
    public virtual ValueTask RemoveAsync(IEnumerable<string> keys, CancellationToken cancellationToken = default);
    public virtual ValueTask RemoveByTagAsync(IEnumerable<string> tags, CancellationToken cancellationToken = default);
    public abstract ValueTask RemoveByTagAsync(string tag, CancellationToken cancellationToken = default);
}

public sealed class HybridCacheEntryOptions
{
    public TimeSpan? Expiration { get; init; }
    public TimeSpan? LocalCacheExpiration { get; init; }
    public HybridCacheEntryFlags? Flags { get; init; }
}

[Flags]
public enum HybridCacheEntryFlags
{
    None = 0,
    DisableLocalCacheRead = 1 << 0,
    DisableLocalCacheWrite = 1 << 1,
    DisableLocalCache = DisableLocalCacheRead | DisableLocalCacheWrite,
    DisableDistributedCacheRead = 1 << 2,
    DisableDistributedCacheWrite = 1 << 3,
    DisableDistributedCache = DisableDistributedCacheRead | DisableDistributedCacheWrite,
    DisableUnderlyingData = 1 << 4,
    DisableCompression = 1 << 5,
}

public interface IHybridCacheSerializer<T>
{
    T Deserialize(ReadOnlySequence<byte> source);
    void Serialize(T value, IBufferWriter<byte> target);
}

public interface IHybridCacheSerializerFactory
{
    bool TryCreateSerializer<T>([NotNullWhen(true)] out IHybridCacheSerializer<T>? serializer);
}

(edit: changed name from cancellationToken to token to mirror IDistributedCache, and added = default)
(edit: added the sync paths)

API Usage

The usage of this API is optional; existing backends that implement IDistributedCache may choose whether or not to additionally implement the new API. The new "hybrid cache" piece will type-test for the feature and use it when available. Backends that do not implement the API continue to work, using the byte[] allocations.

The design for "set" is simple: the caller owns the memory lifetime via ReadOnlySequence<byte>; the backend (as an API contract) is explicitly meant to copy the data out - storing the passed-in value is undefined behaviour, as that data may go out of scope.

The design for "get" is for the caller to handle memory management (which would otherwise be duplicated and brittle in every backend); this is achieved by passing in an IBufferWriter<byte> to which the backend can push the data. This also means that the "hybrid cache" piece can handle quotas etc before data is fully read. The bool return-value is used to distinguish "found" vs "not found"; this is null vs not null on the old API, and is necessary because zero bytes is a valid payload length in some formats (I'm looking at you, protobuf).
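To make the get/set contract above concrete, here is a minimal sketch of a hypothetical consumer (the type and method names are illustrative, not part of the proposal) that type-tests for the feature and falls back to the legacy byte[] path:

```csharp
using System.Buffers;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical consumer showing both halves of the design: type-test once,
// then prefer the buffer path. Names here are illustrative only.
internal static class CacheHelper
{
    public static byte[]? GetEntry(IDistributedCache cache, string key)
    {
        if (cache is IBufferDistributedCache buffered)
        {
            var writer = new ArrayBufferWriter<byte>(); // caller owns the buffer
            return buffered.TryGet(key, writer)
                ? writer.WrittenSpan.ToArray() // "found" - may legitimately be zero bytes
                : null;                        // "not found"
        }
        return cache.Get(key); // legacy byte[] path for backends without the feature
    }

    public static void SetEntry(IDistributedCache cache, string key,
        ReadOnlySequence<byte> payload, DistributedCacheEntryOptions options)
    {
        if (cache is IBufferDistributedCache buffered)
        {
            // contract: the backend copies the data out before returning;
            // the sequence may be backed by pooled, soon-recycled memory
            buffered.Set(key, payload, options);
        }
        else
        {
            cache.Set(key, payload.ToArray(), options); // linearize for the legacy API
        }
    }
}
```

In real usage the type-test would happen once at setup rather than per call, as noted later in the thread.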


Note that the hybrid cache proposal only uses the async fetch path; however, if we only added async methods, IBufferDistributedCache would be "unbalanced" relative to IDistributedCache (sync vs async), and omitting the sync paths would limit our options later if we decide to add sync paths to hybrid cache; accordingly, sync get+set paths are included in this proposal.
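For a sense of how little is demanded of implementors of the proposed IHybridCacheSerializer<T>, here is an illustrative (not part of the proposal) System.Text.Json-based serializer, assuming the interface shape shown above:

```csharp
using System.Buffers;
using System.Text.Json;
using Microsoft.Extensions.Caching.Hybrid;

// Sketch: a System.Text.Json serializer for the proposed interface.
// Both members work directly against the buffer types, avoiding
// intermediate byte[] allocations.
internal sealed class JsonHybridSerializer<T> : IHybridCacheSerializer<T>
{
    public T Deserialize(ReadOnlySequence<byte> source)
    {
        // Utf8JsonReader consumes the (possibly multi-segment) sequence directly
        var reader = new Utf8JsonReader(source);
        return JsonSerializer.Deserialize<T>(ref reader)!;
    }

    public void Serialize(T value, IBufferWriter<byte> target)
    {
        // Utf8JsonWriter pushes chunks straight into the caller-owned writer
        using var writer = new Utf8JsonWriter(target);
        JsonSerializer.Serialize(writer, value);
    }
}
```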

Alternative Designs

  • Stream - allocatey, indirect, and multi-copylicious; significant complications for producer and consumer
  • IMemoryOwner<byte> or similar return - contiguous, caller gets no chance to intercept until all prepared
  • ReadOnlySpan<byte> input (to avoid storage) - contiguous, only applies to "sync"
  • default interface methods rather than new interface - would be a similar API change, but would involve an extra memcpy and lease for each fetch (default implementation would be "get array, write array to buffer-writer, which in turn needs to lease"); it is preferable to type test instead (once at setup), and use the most efficient strategy
  • naming... yeah, I'm open to offers
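To make the rejected default-interface-method alternative concrete: a DIM could only bridge via the legacy byte[] API, which forces a right-sized allocation plus an extra memcpy on every fetch. A sketch (hypothetical interface name; the thread also notes DIMs are unavailable here due to the TFM spread):

```csharp
using System.Buffers;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical DIM-based variant, shown only to illustrate the rejected design.
public interface IBufferDistributedCacheDim : IDistributedCache
{
    // Default implementation: correct, but it allocates a right-sized array
    // via the legacy API and then copies it again into the caller's buffer.
    bool TryGet(string key, IBufferWriter<byte> destination)
    {
        byte[]? arr = Get(key);        // legacy API: per-call allocation
        if (arr is null) return false; // null == "not found"
        destination.Write(arr);        // extra memcpy into the caller's buffer
        return true;
    }
}
```

Type-testing once at setup, as the proposal prefers, lets each backend use its most efficient strategy instead.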

Risks

None; any existing backends not implementing the feature continue to work as they do currently, hopefully adding support in time. There is no additional service registration for this auxiliary API.

@mgravell mgravell added the api-suggestion Early API idea and discussion, it is NOT ready for implementation label Mar 26, 2024
@dotnet-issue-labeler dotnet-issue-labeler bot added the needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners label Mar 26, 2024
@dotnet-policy-service dotnet-policy-service bot added the untriaged New issue has not been triaged by the area owner label Mar 26, 2024
@MihaZupan MihaZupan added area-Extensions-Caching and removed needs-area-label An area label is needed to ensure this gets routed to the appropriate area owners labels Mar 26, 2024
@neon-sunset
Contributor

neon-sunset commented Mar 26, 2024

ReadOnlySequence<byte> for SetAsync may be a bit painful. Offering ReadOnlyMemory<byte> overloads would certainly help.

@mgravell
Member Author

mgravell commented Mar 26, 2024

@neon-sunset painful how? any ReadOnlyMemory<byte> can be a single-segment ReadOnlySequence<byte> for free, but the reverse is not true; allowing us to use non-contiguous memory is a strongly desirable capability for the usage scenarios, giving us much more flexibility; from a consumption side, there is basically no difference - if the consumer needs it linearized, they can do that themselves at the point they need it; or they can defer and handle the chunks independently
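For illustration, the asymmetry described above looks like this in practice:

```csharp
using System;
using System.Buffers;

ReadOnlyMemory<byte> memory = new byte[] { 1, 2, 3 };

// any ReadOnlyMemory<byte> becomes a single-segment sequence for free:
var sequence = new ReadOnlySequence<byte>(memory);

// the reverse needs a linearization step, but only when actually fragmented:
ReadOnlyMemory<byte> linear = sequence.IsSingleSegment
    ? sequence.First      // no copy for the single-segment case
    : sequence.ToArray(); // copy only for genuinely multi-segment sequences
```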

as the person who is going to be implementing both sides of this API from the MSFT perspective: I don't see the concern (and I've actively prototyped it, and to emphasize: this is the pattern that we already proved with the output-cache backend)

@Tornhoof
Contributor

Any chance to get rid of the String for your new Interface for the key? ReadOnlySpan still does not work in async methods, right?
ReadOnlySequence is weird.

Many implementations probably convert the string to byte arrays anyway.

@mgravell
Member Author

mgravell commented Mar 26, 2024

string is, I think, unavoidable. It is the lingua franca for cache keys, and is going to be used everywhere else - including at the primary caller input - they're not going to do a complex lease, write, release; they're going to do $"/food/{id}". I did consider ReadOnlyMemory<char>, but it makes everything unnecessarily obtuse. I guess if we had generator support the ROM-char approach might apply, but... I don't think it is the real cost here, and the L1 etc will want string.

As for backends: I wouldn't presume. A database backend might want string; Redis is fine with string, etc.

ReadOnlySequence<byte> may be "weird", but it is absolutely the type I would expect in modern .NET for library-level (not application-level) "BLOB that might be non-trivial size".

@Tornhoof
Contributor

Ah, hmpf, GitHub stole the char from my ReadOnlySequence, ReadOnlySequence<char> instead of string would be weird.

@neon-sunset
Contributor

neon-sunset commented Mar 26, 2024

Perhaps ReadOnlyMemory<char>-based key overloads could still be offered? In particular, ReadOnlyMemory<char> would allow allocation-free lookup by interpolated key: pass a ROM<char> from an interpolated handler, then return its buffer to the pool. I assume going through UTF-16 is an expected tradeoff given the IO-bound nature of many access patterns (but if ReadOnlyMemory<byte> for keys is on the table, that would be awesome!).

@mgravell
Member Author

mgravell commented Mar 26, 2024

Well, in theory I guess we could use ROM-char, and if the input is string, use AsMemory - and reversing that, there is MemoryMarshal.TryGetString. So yes it is possible, and doesn't necessarily involve an additional unnecessary string. It is a relevant design question. I think ROM-byte for the key is too far.
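The round-trip mentioned here can be sketched as follows (illustrative only; this is not part of the proposal):

```csharp
using System;
using System.Runtime.InteropServices;

ReadOnlyMemory<char> key = "/food/42".AsMemory();

// string -> ROM<char> is just AsMemory(); recovering the string is possible
// without re-allocating when the memory was sliced from a string:
if (MemoryMarshal.TryGetString(key, out string? text, out int start, out int length))
{
    // 'text' is the backing string; start/length locate the slice within it.
    // Only a slice that doesn't cover the whole string forces a Substring.
    string s = (start == 0 && length == text.Length)
        ? text
        : text.Substring(start, length);
}
```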

@neon-sunset
Contributor

neon-sunset commented Mar 26, 2024

My elevator pitch for a ROM<byte>-based key is that IBufferDistributedCache will most likely be serviced by back-ends that talk UTF-8. As of .NET 8, a lot of performance-oriented APIs do provide UTF-8 overloads. With that said, the escape hatch for this could be ROM<char>, as it would avoid unnecessary allocation traffic (through pooling the buffers for encoded keys).

@mgravell
Member Author

mgravell commented Mar 27, 2024

most likely will be serviced by back-ends that talk UTF-8

That is a concern for the backend; the caller doesn't have any insight or interest in the "how"; if UTF8String had landed: it might be a different conversation

@mgravell
Member Author

mgravell commented Mar 27, 2024

@Tornhoof ah, your comment makes a lot more sense now; yes, ReadOnlySequence<char> key would be very weird: total agreement; we expect keys to be "reasonably" sized, where "reasonable" means at a minimum "fine in a contiguous chunk"

@neon-sunset
Contributor

neon-sunset commented Mar 27, 2024

most likely will be serviced by back-ends that talk UTF-8

That is a concern for the backend; the caller doesn't have any insight or interest in the "how"; if UTF8String had landed: it might be a different conversation

Worst case in the future this can be retrofitted through DIM that would perform conversion, should the interface be extended.

offtopic: UTF8String is likely never to land given the post-.NET 5 conclusions, but all the facilities to define one outside of CoreLib are pretty much in place through u8 literals, IUtf8SpanParsable<T>, IUtf8SpanFormattable, and all relevant APIs providing ROS/ROM<byte> overloads. Though I do have a personal bias in this, so take it with a grain of salt 😄

@mgravell mgravell changed the title [API Proposal]: buffer-based distributed caching API - [API Proposal]: Migrate HybridCache from aspnet to runtime Jun 5, 2024
@mgravell mgravell added the api-ready-for-review API is ready for review, it is NOT ready for implementation label Jun 6, 2024
@eerhardt eerhardt added the blocking Marks issues that we want to fast track in order to unblock other important work label Jun 21, 2024
@eerhardt eerhardt added this to the 9.0.0 milestone Jun 21, 2024
@dotnet-policy-service dotnet-policy-service bot removed the untriaged New issue has not been triaged by the area owner label Jun 21, 2024
@brantburnett
Contributor

Should SetAsync be returning ValueTask? Are we expecting backing implementations to use IValueTaskSource in their implementation? If not, I thought the guidance was to use Task as the return type since it can return the singleton Task.CompletedTask allocation-free.

For that matter, if the previous answer is no, is there value in GetAsync returning ValueTask<bool>? An async method returning bool that completes synchronously uses Task.FromResult(bool), which also returns a cached singleton on modern .NET. This may be a bit too esoteric though.

@mgravell
Member Author

mgravell commented Jun 23, 2024

default or ValueTask.CompletedTask (same thing): also allocation free

Happy to refactor as part of throwing this over the fence from aspnet, but I wonder if @stephentoub wants to opine on the choices between T and VT here.

@brantburnett
Contributor

default or ValueTask.CompletedTask (same thing): also allocation free

Happy to refactor as part of throwing this over the fence from aspnet, but I wonder if @stephentoub wants to opine on the choices between T and VT here.

It's my understanding (which may be wrong) that ValueTask has additional overhead that makes it slightly slower than Task. Plus there are additional semantics involved (ValueTask is required to be awaited only once, whereas Task may be awaited multiple times). Therefore ValueTask shouldn't be used unless IValueTaskSource is being used to deliver heap allocation benefits in the async case.

That said, this is an interface, which means ValueTask may be preferable to give the implementor flexibility to use IValueTaskSource. I wanted to point it out to ensure it was an intentional decision. Or so I can be told I'm wrong, if that's the case.

@mgravell
Member Author

mgravell commented Jun 24, 2024

In the more generalized case (anything other than bool/void): we would also need to weigh the alloc overhead in the sync case, but yes: in this specific case, we can optimize that away easily enough via reused instances. I'm happy to use Task/Task-bool. It isn't controversial enough for me to get excited about; happy to agree if it unblocks things :)
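For reference, the synchronous-completion shapes being compared here are all allocation-free, though for different reasons (illustrative sketch; local function names are invented):

```csharp
using System.Threading.Tasks;

// all three complete synchronously without allocating:

static ValueTask Done() => default;
// default(ValueTask) == ValueTask.CompletedTask: no heap object at all

static ValueTask<bool> Found() => new ValueTask<bool>(true);
// the bool is stored inline in the struct; no Task<bool> is created

static Task<bool> FoundTask() => Task.FromResult(true);
// allocation-free only because the runtime caches the two bool singletons
```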

@neon-sunset
Contributor

The downside of Task<T> is that there is no PoolingAsyncValueTaskMethodBuilder variant for plain tasks, which isn't normally an issue but with hot yet asynchronously yielding calls this limits optimization options. Hopefully the need for that will go away once Runtime Handled Tasks experiment becomes part of runtime but for now this remains a concern.

@stephentoub
Member

For the APIs in question, would it be at all common to write code that wouldn't immediately await the returned task? If every consumer will immediately await, there's relatively little downside to using ValueTask, with the upside that it could in rare-ish cases be made ammortized allocation-free. The main concern with returning ValueTask here is possible misuse, because if someone tries to use it like they might a Task (e.g. storing it into a field and awaiting it multiple times, putting it into a dictionary, wanting to use it as part of a WhenAny, etc.), the results will range from exception highlighting the misuse to spooky failure at a distance.

@mgravell
Member Author

mgravell commented Jun 26, 2024

In the case of IBufferDistributedCache: ultimately the hope is that there are very few direct consumers of this API - basically just HybridCache, with this API primarily existing so that different backends (SQL Server, Redis, CosmosDB, etc) have the opportunity to advertise a feature to HybridCache. Since this is an optional API, I do not expect any (or hardly any) end user code to use this API directly, so the risk of misuse here is basically zero.

That said: this is a concurrent API, not a sequential API; as such, I do not expect IVTS implementations, hence I do not think the alloc-free point is genuine, at least in the truly async case - nor do I expect many implementations (except perhaps Tsavorite === "MSFT Faster", which I do have a test branch for) to provide synchronous results (where VT also works well, and in the case of bool: this can be optimized trivially).


For HybridCache itself: IMO we need the VT for the fast-path "I got your value from L1" result

@stephentoub stephentoub removed the api-suggestion Early API idea and discussion, it is NOT ready for implementation label Jun 28, 2024
@bartonjs
Member

bartonjs commented Jul 9, 2024

Video

  • Consider moving HybridCache virtual/abstract members to the template method pattern
  • Do we have any implementations of HybridCache that we're making public? What layer(s)? How does someone instantiate one (without DI)?
  • It was asked if all the (non-generic) ValueTask methods should use Task, and since it feels like the usage does not expect parallel calls from a single caller (followed by e.g. WaitAll), ValueTask is fine.
  • CancellationToken parameters should be named cancellationToken. IBufferDistributedCache extends a type that did this wrong, so it should maintain local consistency.
namespace Microsoft.Extensions.Caching.Distributed
{
    public interface IBufferDistributedCache : IDistributedCache
    {
        bool TryGet(string key, IBufferWriter<byte> destination);
        ValueTask<bool> TryGetAsync(string key, IBufferWriter<byte> destination, CancellationToken token = default);
        void Set(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options);
        ValueTask SetAsync(string key, ReadOnlySequence<byte> value, DistributedCacheEntryOptions options, CancellationToken token = default);
    }
}
namespace Microsoft.Extensions.Caching.Hybrid
{
    public partial interface IHybridCacheSerializer<T>
    {
        T Deserialize(ReadOnlySequence<byte> source);
        void Serialize(T value, IBufferWriter<byte> target);
    }
    public interface IHybridCacheSerializerFactory
    {
        bool TryCreateSerializer<T>([NotNullWhen(true)] out IHybridCacheSerializer<T>? serializer);
    }
    public sealed class HybridCacheEntryOptions
    {
        public TimeSpan? Expiration { get; init; }
        public TimeSpan? LocalCacheExpiration { get; init; }
        public HybridCacheEntryFlags? Flags { get; init; }
    }
    [Flags]
    public enum HybridCacheEntryFlags
    {
        None = 0,
        DisableLocalCacheRead = 1 << 0,
        DisableLocalCacheWrite = 1 << 1,
        DisableLocalCache = DisableLocalCacheRead | DisableLocalCacheWrite,
        DisableDistributedCacheRead = 1 << 2,
        DisableDistributedCacheWrite = 1 << 3,
        DisableDistributedCache = DisableDistributedCacheRead | DisableDistributedCacheWrite,
        DisableUnderlyingData = 1 << 4,
        DisableCompression = 1 << 5,
    }
    public abstract class HybridCache
    {
        [SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Delegate differences make this unambiguous")]
        public abstract ValueTask<T> GetOrCreateAsync<TState, T>(string key, TState state, Func<TState, CancellationToken, ValueTask<T>> factory,
            HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default);
        [SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Delegate differences make this unambiguous")]
        public ValueTask<T> GetOrCreateAsync<T>(string key, Func<CancellationToken, ValueTask<T>> factory,
            HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default)
            => throw new System.NotImplementedException();

        public abstract ValueTask SetAsync<T>(string key, T value, HybridCacheEntryOptions? options = null, IReadOnlyCollection<string>? tags = null, CancellationToken cancellationToken = default);

        [SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
        public abstract ValueTask RemoveAsync(string key, CancellationToken cancellationToken = default);

        [SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
        public virtual ValueTask RemoveAsync(IEnumerable<string> keys, CancellationToken cancellationToken = default)
            => throw new System.NotImplementedException();

        [SuppressMessage("ApiDesign", "RS0026:Do not add multiple public overloads with optional parameters", Justification = "Not ambiguous in context")]
        public virtual ValueTask RemoveByTagAsync(IEnumerable<string> tags, CancellationToken cancellationToken = default)
            => throw new System.NotImplementedException();
        public abstract ValueTask RemoveByTagAsync(string tag, CancellationToken cancellationToken = default);
    }
}

@bartonjs bartonjs added api-needs-work API needs work before it is approved, it is NOT ready for implementation and removed api-ready-for-review API is ready for review, it is NOT ready for implementation labels Jul 9, 2024
@mgravell
Member Author

mgravell commented Jul 10, 2024

Consider moving HybridCache virtual/abstract members to the template method pattern

Not sure there's enough complexity to warrant that, but I'll take a look

Do we have any implementations of HybridCache that we're making public? What layer(s)? How does someone instantiate one (without DI)?

The main proposed impl here is in another OOB package - Microsoft.Extensions.Caching.Hybrid, owned by the aspnetcore repo; the DI approach here is services.AddHybridCache([...]). There is currently no non-DI mechanism proposed for instantiating the concrete impl, but this could be tweaked. The impl is currently internal

It was asked if all the (non-generic) ValueTask methods should use Task, and since it feels like the usage does not expect parallel calls from a single caller (followed by e.g. WaitAll), ValueTask is fine.

👍

CancellationToken parameters should be named cancellationToken. IBufferDistributedCache extends a type that did this wrong, so it should maintain local consistency.

applied in PR

@mgravell
Member Author

mgravell commented Jul 10, 2024

Video comments (in order):

Serialization: by default, both L1 and L2 would use serialization, because the expectation is that many users will be migrating from raw IDistributedCache; we don't want any surprises with mutable / shared instances. However, in the impl bits (not this PR): if we detect [ImmutableObject(true)], we will use object caching in the L1 (and serialization in the L2 obviously)

ValueTask etc: yes, the expectation is prompt await; most of the more nuanced scenarios are already encapsulated in the library, and in any remaining (rare) cases: the consumer can use Task-wrapping

Template method pattern: by contrast, I'm thinking of this type as an interface but with more flexibility for future addition (we can't use default interface methods because of the TFM spread); we are shipping one implementation in-box (aspnetcore), and we anticipate FusionCache (3rd party) also implementing it.

Re factory constructor: if we do that, it would need to be over in Microsoft.Extensions.Caching.Hybrid - we do not want to force all the Microsoft.Extensions.Caching.Hybrid dependencies into Microsoft.Extensions.Caching.Abstractions. Happy to consider factory methods in Microsoft.Extensions.Caching.Hybrid. Again, think of this type as an "interface with benefits".

(just going through chronologically, my apologies for repetition) yes, the plan is for the implementation to remain in aspnetcore, but it is not part of the aspnetcore framework - it is OOB and available to multiple TFMs including netfx and ns2


Re IBufferDistributedCache being not mentioned other than the impl: as noted, it is consumed by the HybridCache impl; the types that implement that are things like "Microsoft.Extensions.Caching.StackExchangeRedis", "Microsoft.Extensions.Caching.SqlServer", or CosmosDB, or NCache, etc - all the different 1st and 3rd party cache backends; to confirm (stated in meeting) aspnetcore have implemented the new IBufferDistributedCache for redis (including Garnet, ValKey, etc) and SQL Server.

Yes, it is a correct statement that aspnetcore contains the default implementation.

Yes, multiple separate components implement IBufferDistributedCache; currently all 1st party, but hopefully growing once the API is available in a sensible package.


Re the topic of things being in abstractions or runtime; ultimately the point of caching-abstractions is to simplify layering:

  • we need the IBufferDistributedCache in abstractions so that 1st and 3rd party backend cache providers (redis, SQL Server, NCache, CosmosDB, sqlite, Tsavorite, etc) can implement the service without needing to take the (non-trivial) dependency graph of a full impl (or indeed, any specific impl)
  • we want HybridCache in abstractions so that we can have both our 1st-party (aspnetcore) and 3rd party (FusionCache, etc) implementations of the API, and to allow consumers to work against the API without thinking about the concrete impl

By comparison: think of IMemoryCache in abstractions, vs MemoryCache in Microsoft.Extensions.Caching.Memory; very similar intentions and aims; HybridCache compares to IMemoryCache - with the real type being over in aspnetcore (emphasis: OOB, multi-TFM)


Options: the idea is to reuse the policy instance; the DI/creation layer in aspnetcore has facilities to specify the default options to use when none is provided per-call


Typical usage:

  • app runtime code
    • refs Microsoft.Extensions.Caching.Abstractions
    • consumes HybridCache via DI
  • app setup code
    • calls a HybridCache DI registration, for example AddHybridCache(...) via Microsoft.Extensions.Caching.Hybrid
    • optionally calls an IDistributedCache DI registration, for example AddStackExchangeRedisCache(...) via Microsoft.Extensions.Caching.StackExchangeRedis
  • backend cache implementations (for arbitrary storage layers)
    • refs Microsoft.Extensions.Caching.Abstractions
    • implements IDistributedCache
    • optionally implements IBufferDistributedCache

@jodydonetti
Contributor

Naming is hard etc: one very minor thing I noticed only now.
Shouldn't it be IBufferedDistributedCache instead of IBufferDistributedCache (notice the "ed" after "IBuffer")?

@jozkee jozkee added api-ready-for-review API is ready for review, it is NOT ready for implementation and removed api-needs-work API needs work before it is approved, it is NOT ready for implementation labels Jul 16, 2024
@jozkee
Member

jozkee commented Jul 22, 2024

This was discussed via email and the general sentiment was of agreement with the current proposal. Considering that #103103 was also approved, I will go ahead and consider this as api-approved to include it in preview 7. Hopefully, this aligns with the official verdict.

@jozkee jozkee added api-approved API was approved in API review, it can be implemented api-ready-for-review API is ready for review, it is NOT ready for implementation and removed api-ready-for-review API is ready for review, it is NOT ready for implementation api-approved API was approved in API review, it can be implemented labels Jul 22, 2024
@terrajobst
Member

Based on offline conversation, we've decided to approve this as proposed.

@terrajobst terrajobst added api-approved API was approved in API review, it can be implemented and removed api-ready-for-review API is ready for review, it is NOT ready for implementation labels Jul 22, 2024
@eerhardt
Member

@mgravell - can this be closed now that #103103 is merged?

@mgravell
Member Author

@eerhardt yes, if that's the correct approach here (I've closed things too soon before, i.e. at the wrong time for the expected shipping process)

@eerhardt
Member

Yep - once the PR (or PRs) have merged into main that add the proposed APIs, we close the API proposal issue.

This was implemented by #103103. Closing.

@github-actions github-actions bot locked and limited conversation to collaborators Aug 31, 2024