
When using the pool in async mode with POOL_GETMODE_TIMEDWAIT, the number of opened and busy connections exceeds max #390

Open
golubovai opened this issue Aug 29, 2024 · 5 comments
Labels
question Further information is requested

Comments

@golubovai

Hello!
This test case can be used to clarify what is going on:

async def test_5529(self):
    "5529 - ensure max limit of pool is not exceeded"
    proc_name = test_env.get_sleep_proc_name()

    async def work(pool: oracledb.AsyncConnectionPool):
        while True:
            conn = None
            try:
                conn = await pool.acquire()
                async with conn.cursor() as cursor:
                    await cursor.callproc(proc_name, [5])
                break
            except Exception:
                continue
            finally:
                if conn is not None:
                    await conn.close()

    pool = test_env.get_pool_async(
        min=0, max=2, increment=1, wait_timeout=500,
        getmode=oracledb.POOL_GETMODE_TIMEDWAIT,
    )
    for _ in range(20):
        asyncio.create_task(work(pool))
    for _ in range(10):
        await asyncio.sleep(0.5)
        print(f'opened: {pool.opened} busy: {pool.busy} max: {pool.max}')
        self.assertTrue(pool.busy <= pool.max)
@golubovai golubovai added the question Further information is requested label Aug 29, 2024
@golubovai
Author

golubovai commented Aug 29, 2024

It's this place in the source:

async def _process_request(self, PooledConnRequest request):
    """
    Processes a request.
    """
    cdef BaseThinConnImpl conn_impl
    try:
        if request.requires_ping:
            try:
                request.conn_impl.set_call_timeout(self._ping_timeout)
                await request.conn_impl.ping()
                request.conn_impl.set_call_timeout(0)
                request.completed = True
            except exceptions.Error:
                request.conn_impl._force_close()
                request.conn_impl = None
        else:
            conn_impl = await self._create_conn_impl(request.params)
            if request.conn_impl is not None:
                request.conn_impl._force_close()
            request.conn_impl = conn_impl
            request.conn_impl._is_pool_extra = request.is_extra
            request.completed = True
    except Exception as e:
        request.exception = e
    finally:
        request.in_progress = False
        request.bg_processing = False

In the else branch a new connection is created for a non-waiting request with no regard for the current number of opened connections, because the condition in _get_next_request pushes it through to processing:

if not request.waiting \
        or request.requires_ping \
        or request.is_replacing \
        or request.is_extra \
        or self._open_count < self.max:

I think we can change this to either:

elif self._open_count < self.max

or:

elif request.waiting and self._open_count < self.max

depending on whether we want timed-out requests to be taken into account when creating connections.
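
For illustration, a rough sketch of how the check in _get_next_request could look with the second variant (untested, and the surrounding structure is assumed rather than copied from the source):

# Sketch only: let the special cases through as before, but only allow a
# new connection to be created for a request that is still waiting and
# while the pool is below its maximum size.
if request.requires_ping \
        or request.is_replacing \
        or request.is_extra:
    ...  # handled as before
elif request.waiting and self._open_count < self.max:
    ...  # only now is it safe to create or hand out a connection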

@anthony-tuininga
Member

anthony-tuininga commented Oct 26, 2024

The recent changes I pushed to the pool code should resolve this situation as well. Can you confirm, please?

@cjbj
Member

cjbj commented Nov 7, 2024

@golubovai python-oracledb 2.5 has the change @anthony-tuininga made. Can you try it out and let us know whether it resolves your problem?
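
To confirm which version is installed, something like this should do (assuming a standard pip install):

python -c "import oracledb; print(oracledb.__version__)"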

@rreboot

rreboot commented Nov 15, 2024

Hi. I have a similar situation. Updating to the latest version did not help. With long-running queries, the number of connections in the pool exceeds the maximum limit.

@anthony-tuininga
Member

Can you provide more detail about your situation? What pool configuration parameters are you using? What are you using to determine the number of connections in the pool? Can you define "long running"? Anything else you can provide that would help determine the source of the issue? Ideally, a test case that demonstrates what you are saying would be helpful!
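
Something along the lines of the original report would be a good starting point — a rough sketch only, with placeholder credentials/DSN and a sleep call standing in for your long-running query:

import asyncio
import oracledb

async def main():
    # Placeholder credentials/DSN; adjust to your environment.
    pool = oracledb.create_pool_async(
        user="user", password="password", dsn="localhost/orclpdb1",
        min=0, max=2, increment=1,
        getmode=oracledb.POOL_GETMODE_TIMEDWAIT, wait_timeout=500,
    )

    async def work():
        conn = None
        try:
            conn = await pool.acquire()
            async with conn.cursor() as cursor:
                # Stand-in for the long-running query that shows the problem.
                await cursor.execute("begin dbms_session.sleep(30); end;")
        except Exception:
            pass
        finally:
            if conn is not None:
                await conn.close()

    tasks = [asyncio.create_task(work()) for _ in range(20)]
    for _ in range(30):
        # Report the pool counters so any overshoot of max becomes visible.
        print(f'opened: {pool.opened} busy: {pool.busy} max: {pool.max}')
        await asyncio.sleep(1)
    await asyncio.gather(*tasks)
    await pool.close()

asyncio.run(main())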
