Added ttl parameter in cache class #405

Merged: 4 commits merged into aio-libs:master from feature/cache_class_ttl on May 27, 2018

Conversation

@jcugat (Contributor) commented May 26, 2018

As discussed in #387, ttl was the only parameter not allowed by default in the cache class. Cache instances that have this parameter in the constructor will use it by default in all calls, unless a specific value is passed.
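
For illustration, a minimal sketch of the resulting behaviour (SimpleMemoryCache and the concrete ttl values are just example choices, assuming the memory backend forwards constructor kwargs to BaseCache; the exact constructor signature is discussed in the review below):

    import asyncio

    from aiocache import SimpleMemoryCache


    async def main():
        cache = SimpleMemoryCache(ttl=10)  # class-level default used by every call

        await cache.set("implicit", "value")             # uses ttl=10 from the constructor
        await cache.set("explicit", "value", ttl=60)     # an explicit ttl overrides the default
        await cache.set("no_expiry", "value", ttl=None)  # explicit None still means "never expire"

        print(await cache.get("implicit"))


    asyncio.run(main())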

@codecov bot commented May 26, 2018

Codecov Report

Merging #405 into master will increase coverage by <.01%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #405      +/-   ##
==========================================
+ Coverage   99.76%   99.76%   +<.01%     
==========================================
  Files           9        9              
  Lines         861      866       +5     
  Branches       91       91              
==========================================
+ Hits          859      864       +5     
  Misses          2        2
Impacted Files           Coverage Δ
aiocache/decorators.py   100% <100%> (ø) ⬆️
aiocache/base.py         100% <100%> (ø) ⬆️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 50a690b...058a83b.

aiocache/base.py (Outdated)
@@ -99,9 +103,10 @@ class BaseCache:

     def __init__(
             self, serializer=None, plugins=None,
-            namespace=None, key_builder=None, timeout=5):
+            namespace=None, ttl=None, key_builder=None, timeout=5):
Member: Can you add it as the last attribute? Otherwise the change could be breaking if someone is not naming the attributes when calling.

Contributor Author: Done
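
A sketch of the breakage the reviewer is guarding against (the positional call below is hypothetical caller code, and the final parameter order is only assumed here):

    # Hypothetical caller relying on positional order before this PR:
    #     BaseCache(my_serializer, my_plugins, "myns", my_key_builder, 5)
    # If ttl were inserted between namespace and key_builder, my_key_builder
    # would silently be bound to ttl and 5 to key_builder. Appending the new
    # parameter at the end keeps such calls working:
    class BaseCache:  # signature sketch only; body omitted
        def __init__(
                self, serializer=None, plugins=None,
                namespace=None, key_builder=None, timeout=5, ttl=None):
            self.ttl = ttl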

aiocache/base.py (Outdated)
@@ -9,6 +9,8 @@

 logger = logging.getLogger(__file__)
+
+sentinel = object()
Member: Let's put it uppercase. Since it's used in other modules, I prefer this to be more visible.

Contributor Author: Done
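
The pattern under discussion, as a standalone sketch (the toy Cache class below is illustrative, not aiocache's real implementation):

    SENTINEL = object()  # uppercase so it clearly reads as a shared constant when imported elsewhere


    class Cache:
        def __init__(self, ttl=None):
            self.ttl = ttl
            self._store = {}

        def set(self, key, value, ttl=SENTINEL):
            # SENTINEL means "caller did not pass ttl": fall back to the instance default.
            # An explicit ttl=None is kept as-is and means "no expiration".
            if ttl is SENTINEL:
                ttl = self.ttl
            self._store[key] = (value, ttl)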

-            await self.cache.set(key, value, ttl=self.ttl)
+            kwargs = {}
+            if self.ttl is not sentinel:
+                kwargs['ttl'] = self.ttl
Member: Why do we need to replicate the logic here? This is already done in the class, so if we just propagate self.ttl here (having sentinel as the default value) it should be enough, right?

Contributor Author: I wanted to avoid having to update all the tests to use SENTINEL when mocking the call. Updated.

@@ -291,8 +295,11 @@ def get_cache_keys(self, f, args, kwargs):

     async def set_in_cache(self, result, fn_args, fn_kwargs):
         try:
+            kwargs = {}
+            if self.ttl is not sentinel:
+                kwargs['ttl'] = self.ttl
Member: Same as before.

Contributor Author: Updated

await base_cache.add(pytest.KEY, "value")

assert base_cache._add.call_count == 1
assert base_cache._add.call_args[1]['ttl'] == 10
Member: Let's use assert_called_once_with; it's more compact.

Contributor Author: Done

def set_test_ttl(self, base_cache):
    base_cache.ttl = 10
    yield
    base_cache.ttl = None
Member: You don't need to reset the ttl to None every time. The base_cache fixture is per test, so this fixture is not needed; just set the ttl of base_cache directly in the test whenever you need it.

Contributor Author: Done

await base_cache.add(pytest.KEY, "value", ttl=None)

assert base_cache._add.call_count == 1
assert base_cache._add.call_args[1]['ttl'] is None
Member: I think we are missing the case where we don't pass ttl explicitly and it's not set at the class level either.

Contributor Author: Added

@argaen argaen merged commit 48cab91 into aio-libs:master May 27, 2018
@jcugat jcugat deleted the feature/cache_class_ttl branch June 15, 2019 14:50