
Add pending_limit into Scheduler (ref #19) #23

Closed
wants to merge 4 commits

Conversation

linw1995

This PR is for #19.

- Add parameter `pending_limit` into `Scheduler`
  to control the size of pending jobs (see the usage sketch below)
- Add property `pending_limit` into `Scheduler`
- Add property `waiting_count` into `Scheduler`
- Add method `_release_waiter` into `Scheduler`
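For illustration, a minimal sketch of how the proposed parameter could be used, assuming it lands with this name on `create_scheduler` and that the rest of the aiojobs API behaves as it already does; this is not code from the PR.

```python
import asyncio

from aiojobs import create_scheduler


async def worker(i):
    await asyncio.sleep(1)


async def main():
    # Hypothetical usage: at most 2 jobs run concurrently and at most
    # 4 more may wait in the pending queue; once both are full, the
    # proposed behaviour is that spawn() waits for a slot to free up.
    scheduler = await create_scheduler(limit=2, pending_limit=4)
    for i in range(10):
        await scheduler.spawn(worker(i))
    await scheduler.close()


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```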
@codecov

codecov bot commented Dec 20, 2017

Codecov Report

Merging #23 into master will not change coverage.
The diff coverage is 100%.

Impacted file tree graph

@@          Coverage Diff          @@
##           master    #23   +/-   ##
=====================================
  Coverage     100%   100%           
=====================================
  Files           4      4           
  Lines         221    259   +38     
=====================================
+ Hits          221    259   +38
| Impacted Files        | Coverage Δ         |
|-----------------------|--------------------|
| aiojobs/_scheduler.py | 100% <100%> (ø) ⬆️ |
| aiojobs/__init__.py   | 100% <100%> (ø) ⬆️ |

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3c8e688...7596e81. Read the comment docs.

@@ -10,20 +10,22 @@
     from collections.abc import Sized, Iterable, Container
     bases = Sized, Iterable, Container
 else:  # pragma: no cover
-    bases = (Collection,)
+    bases = (Collection, )
Member

Please don't touch the line, space is not needed.

@@ -71,8 +81,19 @@ def closed(self):
         if self._closed:
             raise RuntimeError("Scheduling a new job after closing")
         job = Job(coro, self, self._loop)
-        should_start = (self._limit is None or
-                        self.active_count < self._limit)
+        should_start = (self._limit is None
Member

Please keep existing formatting for the line.
The same should be applied to `should_wait`.

@@ -60,6 +66,10 @@ def active_count(self):
     def pending_count(self):
         return len(self._pending)
 
+    @property
+    def waiting_count(self):
Member

No need to expose the property IMHO

+                        and self.pending_count >= self._pending_limit)
+        if not should_start and should_wait:
+            waiter = self._loop.create_future()
+            waiter._job = job
Member

No chance to accept this line: adding a custom attribute to asyncio Future has very bad smell.
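A minimal sketch of one way to keep the waiter-to-job association without touching the Future object, assuming the scheduler owned a plain dict (called `waiting` here); names and structure are illustrative, not the PR's code.

```python
import asyncio


async def main(loop):
    # Instead of `waiter._job = job`, keep the association in a dict
    # owned by the scheduler.  `job` is just a stand-in object here.
    waiting = {}
    job = object()

    waiter = loop.create_future()
    waiting[waiter] = job

    # Later, when an active job finishes, release one waiter:
    released_waiter, released_job = waiting.popitem()
    if not released_waiter.done():
        released_waiter.set_result(None)

    await released_waiter  # the paused spawn() call would resume here
    assert released_job is job


loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
```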

         self._exception_handler = exception_handler
         self._failed_tasks = asyncio.Queue(loop=loop)
         self._failed_task = loop.create_task(self._wait_failed())
         self._pending = deque()
+        self._waiting = deque()
Member

I believe using `asyncio.Queue` for `_pending` and not adding `_waiting` is a more robust and simpler solution.

Author

Using `asyncio.Queue` is simpler, but it can't limit the size of pending jobs when pending_limit == 0.

Member

`asyncio.Queue(maxsize=pending_limit)` gives infinite size if pending_limit=0.
Do you see any problem with this solution?

Author

I mean that spawn always waits until one active job closes when pending_limit=0, and _pending has infinite size when pending_limit=None.
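For context on this disagreement, a standalone check of the standard-library behaviour being discussed (illustration only, not code from the PR): `asyncio.Queue` treats `maxsize=0` as unbounded, so it cannot express the proposed meaning of pending_limit=0, i.e. no pending slots at all.

```python
import asyncio


async def main():
    # maxsize=0 (the default) means the queue is unbounded,
    # so put_nowait() never raises QueueFull...
    unbounded = asyncio.Queue(maxsize=0)
    for i in range(1000):
        unbounded.put_nowait(i)

    # ...while a positive maxsize makes the queue reject (or block on)
    # further items once it is full.
    bounded = asyncio.Queue(maxsize=4)
    for i in range(4):
        bounded.put_nowait(i)
    try:
        bounded.put_nowait(4)
    except asyncio.QueueFull:
        print('bounded queue is full after 4 items')


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```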

@asvetlov
Member

#24 is simpler

@asvetlov closed this Mar 10, 2018