This repository has been archived by the owner on Feb 21, 2023. It is now read-only.

Pool refactoring #129

Merged · 27 commits merged into master from pool_refactoring · Jan 13, 2017

Conversation

popravich
Contributor

@popravich popravich commented Jun 6, 2016

These changes are intended to invert the pool-to-Redis dependency: Redis becomes the top-level API,
working either with a pool or with a single connection.

# replace this:
redis_pool = yield from create_pool(...)
with (yield from redis_pool) as redis:
    yield from redis.set('foo', 'bar')

# with this:
redis = yield from create_redis_pool(...)
yield from redis.set('foo', 'bar')

Sub tasks:

  • Implement new RedisPool;
  • Adapt Redis mixins to work with the pool;
  • Add address to Connection;
  • Implement pool pub/sub;
  • Fix pipeline/multiexec execution;
  • Fix tests;
  • Rewrite docs.
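The new top-level API can be sketched with a toy pool, written in modern async/await rather than the yield from syntax above. TinyPool and FakeConnection are hypothetical stand-ins for aioredis internals; this is a minimal sketch of the acquire/execute/release cycle the refactoring hides from the caller, not the actual implementation:

```python
import asyncio


class FakeConnection:
    """Stand-in for a Redis connection; stores values in a dict."""

    def __init__(self):
        self.data = {}

    async def execute(self, cmd, *args):
        if cmd == "SET":
            self.data[args[0]] = args[1]
            return "OK"
        if cmd == "GET":
            return self.data.get(args[0])
        raise ValueError(cmd)


class TinyPool:
    """Minimal pool exposing commands at the top level, so the caller
    never acquires a connection explicitly."""

    def __init__(self, size=2):
        self._free = asyncio.Queue()
        self._shared = {}  # all fake connections share one store
        for _ in range(size):
            conn = FakeConnection()
            conn.data = self._shared
            self._free.put_nowait(conn)

    async def execute(self, cmd, *args):
        conn = await self._free.get()    # acquire a free connection
        try:
            return await conn.execute(cmd, *args)
        finally:
            self._free.put_nowait(conn)  # release it back to the pool

    # convenience wrappers, in the spirit of the Redis mixins
    async def set(self, key, value):
        return await self.execute("SET", key, value)

    async def get(self, key):
        return await self.execute("GET", key)


async def main():
    redis = TinyPool()
    await redis.set("foo", "bar")
    return await redis.get("foo")


print(asyncio.run(main()))  # prints: bar
```

From the caller's side this matches the "with this" snippet: one object, no explicit acquire.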

@popravich popravich force-pushed the pool_refactoring branch 3 times, most recently from 13ffb1c to ffd32f7 Compare June 13, 2016 07:58
@coveralls

coveralls commented Jun 22, 2016


Coverage decreased (-1.0%) to 96.2% when pulling 742b7f9 on pool_refactoring into 8726c2e on master.

@coveralls

coveralls commented Jun 22, 2016


Coverage decreased (-0.6%) to 96.615% when pulling fda28de on pool_refactoring into 8726c2e on master.

@codecov-io

codecov-io commented Nov 3, 2016

Current coverage is 96.48% (diff: 85.53%)

Merging #129 into master will decrease coverage by 0.49%

@@             master       #129   diff @@
==========================================
  Files            51         50     -1   
  Lines          6160       6267   +107   
  Methods           0          0          
  Messages          0          0          
  Branches        465        481    +16   
==========================================
+ Hits           5974       6047    +73   
- Misses          141        166    +25   
- Partials         45         54     +9   

Powered by Codecov. Last update b4f54fc...d0d25db

@argaen
Contributor

argaen commented Nov 3, 2016

Do you think it is possible (or makes sense) to make the initialization of the Client/Pool synchronous like in aiomcache?

https://github.com/aio-libs/aiomcache#getting-started

This way we could avoid patterns like this one, where every function needs to connect to the pool before doing its operation.

@asvetlov
Contributor

asvetlov commented Nov 3, 2016

I think aiomcache has an error in its design.
It makes it extremely easy to create an aiomcache.Client with global lifetime.
But the lifetime of a Client should be shorter than the loop's.
Using coroutines to create connection instances prevents silly bugs.

Let's consider the following code:

class A:
    client = aiomcache.Client()

What's wrong with it? Hint: think about testing, for example.

@argaen
Contributor

argaen commented Nov 3, 2016

From a user perspective I prefer this: https://github.com/argaen/aiocache/blob/master/aiocache/backends/memcached.py#L11 over having something like https://github.com/argaen/aiocache/blob/master/aiocache/backends/redis.py#L120 and then calling it for every method that needs to interact with redis…

Also, for testing, it's easier for me to do:

mymemcached = MemcachedCache()
mymemcached.client = MagicMock()

rather than having to patch the _connect function so that it returns what the double await call is supposed to return.
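The mocking approach argaen describes can be shown as a self-contained sketch. MemcachedCache here is a hypothetical wrapper, not the real aiocache class, and unittest.mock.AsyncMock (Python 3.8+) makes the replaced client awaitable:

```python
import asyncio
from unittest.mock import AsyncMock


class MemcachedCache:
    """Hypothetical cache wrapper; in real code the client would be
    created on connect, but for the test we replace it outright."""

    def __init__(self):
        self.client = None  # real code would set this up lazily

    async def get(self, key):
        return await self.client.get(key)


async def test_get():
    cache = MemcachedCache()
    cache.client = AsyncMock()               # swap in a mock client
    cache.client.get.return_value = b"value"
    result = await cache.get(b"key")
    cache.client.get.assert_awaited_once_with(b"key")
    return result


print(asyncio.run(test_get()))  # prints: b'value'
```

No patching of any _connect coroutine is needed; the mock is a plain attribute assignment.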

@asvetlov
Contributor

asvetlov commented Nov 3, 2016

@argaen what you really need is a .post_init() coroutine.

In my snippet I meant not mocking but working with a real memcached server in functional tests.

class A:
    client = aiomcache.Client()


async def test_a():
    a = A()
    await a.client.get('key')

test_a will hang forever because the test suite creates a new event loop for every test.
Proper test code should look like this:

async def test_a(loop):
    a = A()
    a.client = aiomcache.Client(loop=loop)
    await a.client.get('key')

but overriding every instance that has grabbed the event loop too early is very tedious and error-prone.
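The .post_init() pattern asvetlov suggests can be sketched as follows; the class and method names are illustrative, not the aiomcache API. The constructor does nothing loop-dependent, and everything that needs a running event loop is deferred to an awaited coroutine:

```python
import asyncio


class Client:
    """Sketch of the post_init pattern: the synchronous constructor
    never touches the event loop, so a class-level instance is safe."""

    def __init__(self, host="localhost"):
        self.host = host
        self._queue = None  # created in post_init, under a live loop

    async def post_init(self):
        # asyncio primitives created here belong to the running loop,
        # so per-test loops never see a stale, prematurely-bound object
        self._queue = asyncio.Queue()
        return self

    async def get(self, key):
        assert self._queue is not None, "call post_init() first"
        return f"value-for-{key}"  # fake result; no real server here


async def main():
    client = await Client().post_init()
    return await client.get("key")


print(asyncio.run(main()))  # prints: value-for-key
```

Construction stays cheap and synchronous, while the awaited post_init keeps loop-bound state out of module/class scope.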

@popravich
Contributor Author

Do you think it is possible…

Yes, it is possible: you can instantiate RedisPool directly, but it is kind of a private API.

or makes sense…

No, as this would lead to late errors; i.e. if the redis server is unreachable, connection errors would
be raised only when the decorated function gets called.
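The late-error point above can be sketched with hypothetical LazyClient/eager_create stand-ins: a client constructed synchronously only discovers an unreachable server on its first command, while an awaited factory in the style of create_redis_pool fails at creation time:

```python
import asyncio


class LazyClient:
    """Connects on first command, so errors surface late."""

    async def _connect(self):
        raise ConnectionError("redis server unreachable")

    async def get(self, key):
        await self._connect()  # failure only happens here


async def eager_create():
    """Connects up front, like an awaited factory: errors surface early."""
    raise ConnectionError("redis server unreachable")


async def main():
    lazy = LazyClient()          # no error yet, even if server is down
    try:
        await lazy.get("foo")    # error appears only on first use
    except ConnectionError as exc:
        late = str(exc)
    try:
        await eager_create()     # error appears at creation time
    except ConnectionError as exc:
        early = str(exc)
    return late, early


print(asyncio.run(main()))
```

Both paths hit the same error, but the eager form reports it before any command is issued.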

@popravich popravich changed the title Pool refactoring [WIP] Pool refactoring Jan 13, 2017
@popravich popravich merged commit 8b0efa2 into master Jan 13, 2017
@popravich popravich deleted the pool_refactoring branch January 13, 2017 15:12