# feat(llc): Add caching for own capabilities #47
Changes from all commits: `84a020b`, `da39aad`, `bf00462`, `a98d4ab`, `f6dbc1e`, `932c516`
New file (98 lines):

```dart
import 'dart:async';

import '../../stream_feeds.dart' as api;
import '../../stream_feeds.dart';
import '../utils/batcher.dart';

/// Fetches and caches the current user's own capabilities per feed.
class CapabilitiesRepository {
  CapabilitiesRepository(api.DefaultApi api) : _api = api;

  final api.DefaultApi _api;

  // In-memory cache of own capabilities, keyed by feed id.
  final Map<String, List<FeedOwnCapability>> _capabilities = {};

  // Coalesces individual feed lookups into a single batched API call.
  late final Batcher<String, Result<Map<String, List<FeedOwnCapability>>>>
      _fetchBatcher = Batcher(
    action: (feeds) => _fetchWithRetry(feeds: feeds),
  );

  /// Fetches capabilities for [feeds] and merges the result into the cache.
  Future<Result<Map<String, List<FeedOwnCapability>>>> fetchCapabilities({
    required List<String> feeds,
  }) async {
    final result = await _api.ownCapabilitiesBatch(
      ownCapabilitiesBatchRequest: api.OwnCapabilitiesBatchRequest(
        feeds: feeds,
      ),
    );

    return result.map((response) {
      _mergeWithCache(response.capabilities);
      return response.capabilities;
    });
  }

  /// Seeds the cache from feed data that was already fetched elsewhere.
  void cacheCapabilitiesForFeeds(List<FeedData> feeds) {
    for (final feed in feeds) {
      cacheCapabilities(feed.id, feed.ownCapabilities);
    }
  }

  void cacheCapabilities(String feed, List<FeedOwnCapability> capabilities) {
    _capabilities[feed] = capabilities;
  }

  /// Returns cached capabilities for [feed], falling back to a batched fetch.
  Future<List<FeedOwnCapability>?> getCapabilities(String feed) async {
    return _capabilities[feed] ??
        (await _fetchBatchedFeedCapabilities(feed)).getOrNull();
  }

  void dispose() {
    _fetchBatcher.dispose();
  }

  // Retries once, after 500 ms, if the failure looks transient.
  Future<Result<Map<String, List<FeedOwnCapability>>>> _fetchWithRetry({
    required List<String> feeds,
    bool isRetry = false,
  }) async {
    final result = await fetchCapabilities(feeds: feeds);
    if (result.shouldRetry() && !isRetry) {
      await Future<void>.delayed(const Duration(milliseconds: 500));
      return _fetchWithRetry(feeds: feeds, isRetry: true);
    }

    return result;
  }

  Future<Result<List<FeedOwnCapability>>> _fetchBatchedFeedCapabilities(
    String feed,
  ) async {
    final capabilities = await _fetchBatcher.add(feed);
    return capabilities.map((capabilities) => capabilities[feed] ?? []);
  }

  void _mergeWithCache(Map<String, List<FeedOwnCapability>> capabilities) {
    _capabilities.addAll(capabilities);
  }
}

extension on Result<Map<String, List<FeedOwnCapability>>> {
  // Retry only HTTP failures with a server-side (>= 500) or
  // out-of-range (< 100) status code.
  bool shouldRetry() {
    switch (this) {
      case api.Success():
        return false;

      case final api.Failure failure:
        final error = failure.error;
        if (error is! StreamDioException) {
          return false;
        }
        final exception = error.exception;
        if (exception is! HttpClientException) {
          return false;
        }
        final statusCode = exception.statusCode;
        if (statusCode == null) {
          return false;
        }
        return statusCode < 100 || statusCode >= 500;
    }
  }
}
```

**Member** (comment on lines +57 to +60):
> Is it going to loop forever in case of no internet? Maybe better to limit it to n times. We can use the backoff from `rate_limiter`.

**Author:**
> Because of the `isRetry` flag it retries at most once, so it can't loop forever.
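The `rate_limiter` backoff the reviewer mentions is not part of this diff, but a bounded retry with exponential backoff could replace the single fixed 500 ms retry in `_fetchWithRetry`. The sketch below is illustrative only: `_fetchWithBackoff`, `maxAttempts`, and the delay curve are assumptions, not the package's API.

```dart
// Hypothetical alternative to _fetchWithRetry: cap the number of attempts
// and back off exponentially instead of retrying exactly once after 500 ms.
Future<Result<Map<String, List<FeedOwnCapability>>>> _fetchWithBackoff({
  required List<String> feeds,
  int maxAttempts = 3, // assumption: give up after three attempts
}) async {
  var attempt = 0;
  while (true) {
    final result = await fetchCapabilities(feeds: feeds);
    attempt++;
    // Return on success, on non-retriable failures, or once the cap is hit.
    if (!result.shouldRetry() || attempt >= maxAttempts) return result;
    // Exponential backoff: 500 ms, 1 s, 2 s, ...
    await Future<void>.delayed(
      Duration(milliseconds: 500 * (1 << (attempt - 1))),
    );
  }
}
```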
**Member:**
> Do I read it correctly that on failures we just give up? I'm wondering whether it makes sense to have some retry.

**Author:**
> That's a good idea, I wasn't sure yet what to do. Should we just add the list of failed feeds to the next batch, or retry only this specific batch once, so the next batch is clean and can also be retried?

**Member:**
> I was thinking of adding them to the next batch, but no strong opinion. We could also check what they do in JS with @isekovanic.

**Author:**
> For now I've added a retry after half a second, but we can improve on this later.

**@isekovanic:**
> Currently I haven't added a retry mechanism on JS, but the queued-up IDs simply aren't purged unless the request is successful, so they'll still be there the next time something arrives. But a retry is much better in my opinion as well, since otherwise you might need to wait. I'll add a retry on my side too.
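For comparison with the JS behavior described above, here is a minimal sketch of a batcher that keeps failed keys queued instead of purging them. The actual `Batcher` in `../utils/batcher.dart` is not shown in this PR, so the debounce window, the field names, and the `Result`/`Success` usage are all assumptions made for illustration.

```dart
import 'dart:async';

import '../../stream_feeds.dart'; // assumed to provide Result and Success

/// Hypothetical batcher that, like the JS client described above, does not
/// purge queued keys on failure: they stay queued and are included in the
/// next batch, which is scheduled whenever a new key arrives.
class RetainingBatcher<K, V> {
  RetainingBatcher({
    required this.action,
    this.window = const Duration(milliseconds: 100), // assumed debounce window
  });

  final Future<Result<Map<K, V>>> Function(List<K> keys) action;
  final Duration window;

  final _pending = <K, Completer<Result<Map<K, V>>>>{};
  Timer? _timer;

  /// Queues [key] and returns a future for the shared batch result.
  Future<Result<Map<K, V>>> add(K key) {
    final completer = _pending.putIfAbsent(key, Completer.new);
    _timer ??= Timer(window, _flush);
    return completer.future;
  }

  Future<void> _flush() async {
    _timer = null;
    final keys = _pending.keys.toList();
    final result = await action(keys);
    if (result is Success) {
      // Success: complete and purge everything that was in this batch.
      for (final key in keys) {
        _pending.remove(key)?.complete(result);
      }
    }
    // Failure: leave the keys queued (and their callers waiting), so the
    // next add() schedules a flush that retries them alongside new keys.
  }

  void dispose() {
    _timer?.cancel();
    _timer = null;
  }
}
```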