Conversation
Current coverage is 100% (diff: 100%)

@@           master    #677    diff @@
====================================
  Files          46      46
  Lines        9481    9518     +37
  Methods         0       0
  Messages        0       0
  Branches        0       0
====================================
+ Hits         9481    9518     +37
  Misses          0       0
  Partials        0       0
    val = "".join([year, month, str(hour).zfill(2),
                   str(rand_int).zfill(4)])
    yield int(val)
    raise StopIteration()
Unneeded; generators raise it for you when finished.
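A minimal sketch of the reviewer's point: a generator raises StopIteration on the caller's behalf when its body returns, so the explicit `raise` is redundant (and under PEP 479, Python 3.7+, an explicit StopIteration inside a generator is even converted into a RuntimeError):

```python
def id_generator():
    # Yield a couple of ids; when the function body returns,
    # Python raises StopIteration for the caller automatically.
    yield 1
    yield 2
    # No explicit `raise StopIteration()` needed here; under
    # PEP 479 it would surface as a RuntimeError instead of
    # ending the generator cleanly.

ids = list(id_generator())  # the iteration protocol handles exhaustion
```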
        self.delete_uaids(batched)
        yield len(batched)

    raise StopIteration()
kill
              % months_ago)

    count = 0
    for deletes in router.drop_old_users(months_ago):
Shouldn't batch_size be an argument to drop_old_users? It would work more as advertised and simplify this loop too.
I mainly wanted a way to rein in how frequently the pause happens, while still maximizing the actual delete batch commands to the max supported by AWS. Since only the approximate speed needs controlling, a little slop is OK. It's entirely likely that no pause will be specified at all, since latency is probably going to reduce the max speed anyway.
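A sketch of the trade-off being described: batches stay at the AWS maximum (25 items per DynamoDB batch_write request), while the pause frequency is controlled separately by a user-chosen threshold. The names `drop_old_users`, `delete_fn`, and `pause_batch_size` here are illustrative stand-ins, not the actual autopush signatures:

```python
import time

MAX_BATCH = 25  # DynamoDB's batch_write item limit per request

def drop_old_users(user_ids, delete_fn, pause_batch_size=100, pause=0.0):
    # Delete in full-size AWS batches, but sleep after roughly
    # `pause_batch_size` deletes so the caller controls the
    # approximate overall delete rate; a little slop is fine.
    since_pause = 0
    for i in range(0, len(user_ids), MAX_BATCH):
        batch = user_ids[i:i + MAX_BATCH]
        delete_fn(batch)  # stand-in for the real batch delete call
        since_pause += len(batch)
        yield len(batch)
        if pause and since_pause >= pause_batch_size:
            time.sleep(pause)
            since_pause = 0

# 60 ids -> batches of 25, 25, 10; no pause configured here
sizes = list(drop_old_users(list(range(60)), lambda b: None))
```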
@@ -104,7 +104,7 @@ def periodic_reporter(settings):
     settings.metrics.gauge("update.client.readers",
                            len(reactor.getReaders()))
     settings.metrics.gauge("update.client.connections",
-                           len(settings.clients))
+                           len(settings.clients))
This has a few whitespace changes, but this one looks like cruft (reduce the diff / kill it, please?).
I feel like I'm sprinkling strftime calls all over db.py now...
Force-pushed from 4063c52 to 955560c
    """
    year = str(date.year)
    month = str(date.month).zfill(2)
I'm guessing this partially fixes #653?
Nope, it's how the index key is constructed.
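Reassembling the two snippets above, the index key construction under discussion looks roughly like this (the function name `access_key` is a stand-in for illustration; the body mirrors the quoted diff lines):

```python
from datetime import date

def access_key(d, hour, rand_int):
    # Build the numeric sort key the quoted snippet produces:
    # YYYY + zero-padded month + zero-padded hour + 4-digit
    # zero-padded random suffix, joined and cast to int.
    year = str(d.year)
    month = str(d.month).zfill(2)
    val = "".join([year, month, str(hour).zfill(2),
                   str(rand_int).zfill(4)])
    return int(val)

key = access_key(date(2016, 3, 1), 7, 42)  # -> 201603070042
```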
    connected in the given time-frame.

    The caller must iterate through this generator to trigger batch
    delete calls of the ``batch_size`` provided. Caller should wait as
Probably should remove these, since we don't actually take batch_size
as a parameter. That or specify that batches are 25 items.
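If the docstring instead documents fixed 25-item batches, the underlying chunking is simple; this is a generic sketch, not the autopush implementation:

```python
def batched(items, size=25):
    # Yield fixed-size chunks; 25 matches DynamoDB's batch_write
    # limit, so each chunk maps onto one batch delete request.
    for i in range(0, len(items), size):
        yield items[i:i + size]

chunk_sizes = [len(c) for c in batched(list(range(53)))]  # -> [25, 25, 3]
```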
    for deletes in router.drop_old_users(months_ago):
        click.echo("")
        count += deletes
        if count > batch_size:
Seems like router.drop_old_users will always yield a value <= 25.
edit: me math good.
That's correct. How many should run before pausing is a different thing though.
Guess I'm confused by this function then.
If there are 100 records to be deleted, we fire off two sets of 25, sleep a second, then fire off the second two sets of 25, correct?
We only reset count to zero once we've exceeded 25 records, and we always increment the record count by a max of 25, or am I missing something key?
The batch_size is passed in by the user running the script. It could be set to 50, 100, or 1000. After enough deletes have been run, it pauses.
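The CLI-side behavior being described can be sketched like this; `delete_batches` stands in for `router.drop_old_users(months_ago)` (which yields at most 25 per step), and the function name `run_drop` is illustrative only:

```python
import time

def run_drop(delete_batches, batch_size, pause=1.0):
    # Consume the generator; once more than `batch_size` total
    # deletes have accumulated since the last pause (a user-chosen
    # number like 50, 100, or 1000), sleep before continuing.
    count = 0
    total = 0
    for deletes in delete_batches:
        count += deletes
        total += deletes
        if count > batch_size:
            time.sleep(pause)
            count = 0  # pause taken; start counting again
    return total

deleted = run_drop(iter([25, 25, 25, 25]), batch_size=50, pause=0.0)
```

With `batch_size=50`, the loop runs two full 25-item batches, pauses once the threshold is exceeded, then continues, which matches the "how many should run before pausing" framing above.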
@@ -1,7 +1,7 @@
 import unittest
 import uuid

 from autopush.exceptions import AutopushException
 from autopush.websocket import ms_time
FWIW: moved this to utils.py, since it's used by more than just websocket.
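For context, a plausible body for the `ms_time` helper being moved; this is a guess based on its name and usage, not the actual autopush source:

```python
import time

def ms_time():
    # Current wall-clock time as integer milliseconds since the
    # epoch (guessed implementation, for illustration only).
    return int(time.time() * 1000)

now_ms = ms_time()
```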
Force-pushed from 955560c to 94f2240
Adds a new drop_user command that scans the AccessIndex for users that haven't connected in the given month and removes the route records. Closes #645
Force-pushed from 94f2240 to 3fc0335
r+ wfm
Adds a new drop_user command that scans the AccessIndex for
users that haven't connected in the given month and removes the
route records.
Closes #645