BalancingLearner: add a "cycle" strategy, sampling the learners one by one #188
Conversation
Force-pushed from 1ab7a88 to 396fa52
Force-pushed from 396fa52 to 858ca89
@@ -176,6 +181,20 @@ def _ask_and_tell_based_on_npoints(self, n):
        points, loss_improvements = map(list, zip(*selected))
        return points, loss_improvements

    def _ask_and_tell_based_on_cycle(self, n):
        if not hasattr(self, '_cycle'):
This could also just be set in __init__.
I think this is rather an indication of … If this is a preferable implementation, we should rather remove the other strategy. However, it'd make most sense if …

But …
I see, that's different indeed. Is that a behavior we would want to support? It is also different from essentially everything else in adaptive: it tells the learner what to do and not what goal to reach.
Force-pushed from 858ca89 to e6afb71
This strategy is simple enough that I think we can just add it, subject to the minor modifications I requested.
@@ -173,6 +178,20 @@ def _ask_and_tell_based_on_npoints(self, n):
        points, loss_improvements = map(list, zip(*selected))
        return points, loss_improvements

    def _ask_and_tell_based_on_cycle(self, n):
        if not hasattr(self, "_cycle"):
            self._cycle = itertools.cycle(range(len(self.learners)))
do this in __init__ as I commented above.
But you wrote to do it in the strategy property setter?
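For reference, the hunks above cut off after the first lines of the new method. A minimal sketch of how such a round-robin ask could be completed, assuming the BalancingLearner convention that points are (learner_index, point) tuples and that itertools is imported at module level; this illustrates the technique and is not necessarily the exact body from this PR:

    def _ask_and_tell_based_on_cycle(self, n):
        if not hasattr(self, "_cycle"):
            self._cycle = itertools.cycle(range(len(self.learners)))
        points, loss_improvements = [], []
        for _ in range(n):
            # Round-robin: take the next learner index from the cycle
            # and ask that learner for a single point.
            index = next(self._cycle)
            point, loss_improvement = self.learners[index].ask(n=1)
            # ask() already marks the point as pending in the sub-learner.
            points.append((index, point[0]))
            loss_improvements.append(loss_improvement[0])
        return points, loss_improvements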
Force-pushed from e6afb71 to d25f3d1
@jbweston I think I've done what you suggested (up to #188 (comment)). Merge if you like.
@@ -107,10 +110,13 @@ def strategy(self, strategy):
            self._ask_and_tell = self._ask_and_tell_based_on_loss
        elif strategy == "npoints":
            self._ask_and_tell = self._ask_and_tell_based_on_npoints
        elif strategy == "cycle":
            self._ask_and_tell = self._ask_and_tell_based_on_cycle
            self._cycle = itertools.cycle(range(len(self.learners)))
Now the cycle will be reset every time the strategy is set dynamically. I'm not sure what the best thing to do here is.
That might be the intention? We could also just put it in __init__ ...
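A small sketch of that alternative, creating the cycle once in __init__ so the round-robin position survives dynamic strategy switches (constructor simplified; cdims and other details omitted, so this is not the actual signature):

    import itertools

    class BalancingLearner:
        def __init__(self, learners, *, strategy="loss_improvements"):
            self.learners = learners
            # Created once here; the strategy setter would then only select
            # self._ask_and_tell and never recreate self._cycle, so switching
            # strategies at runtime does not restart the cycle at learner 0.
            self._cycle = itertools.cycle(range(len(self.learners)))
            self.strategy = strategy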
LGTM
This is useful, for example, for the AverageLearner1D and 2D, where sampling on npoints won't sample all learners equally, because that's not how npoints is defined there.
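A usage sketch of the new strategy (the function and learner setup here are illustrative, not taken from the PR):

    import adaptive

    def f(x):
        return x**2

    learners = [adaptive.Learner1D(f, bounds=(-1, 1)) for _ in range(3)]
    bl = adaptive.BalancingLearner(learners, strategy="cycle")

    # With strategy="cycle", ask() samples the learners one by one in
    # round-robin order instead of by loss or npoints.
    points, _ = bl.ask(6)
    print([index for index, _ in points])  # -> [0, 1, 2, 0, 1, 2]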