[Tune] Add repr for ResultGrid class #31941
Conversation
Looks good! Just a few minor change suggestions.
with tune.checkpoint_dir(step=0) as checkpoint_dir:
    path = os.path.join(checkpoint_dir, "checkpoint")
    with open(path, "w") as f:
        f.write(json.dumps({"step": 0}))
We can save a checkpoint with session.report instead; this tune.checkpoint_dir context manager is the old API (see the sketch below).
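For reference, a minimal sketch of the newer pattern, assuming the Ray AIR session API of this era; the trainable name and the reported metric are placeholders, not taken from this PR:

```python
from ray.air import session
from ray.air.checkpoint import Checkpoint


def train_fn(config):
    # Report a metric together with a checkpoint; the session persists the
    # checkpoint, so no manual file handling is needed.
    session.report({"score": 0.0}, checkpoint=Checkpoint.from_dict({"step": 0}))
```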
Looks good!
I have a few minor questions for the repr implementation.
For testing, this is currently an end-to-end test, but I don't think that is needed. We can just create Result objects manually and a ResultGrid containing these elements (a sketch follows below). That would bring the test execution time down to a few milliseconds, compared to the few seconds it currently takes to run the Tune experiment.
Generally we should aim for minimal tests where possible - we don't get any benefit from a full end-to-end test here, so we should just stick with a simple unit test.
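A rough sketch of the kind of unit test this suggests; the Result field list and the asserted layout are assumptions based on the snippets in this thread, so the details may need adjusting:

```python
from pathlib import Path

from ray.air.result import Result


def test_result_repr_layout():
    # Build a Result in memory instead of running a Tune experiment.
    # The field names follow the Result dataclass of this era; adjust if
    # the signature differs in the version under test.
    result = Result(
        metrics={"loss": 0.1, "step": 0},
        checkpoint=None,
        error=None,
        log_dir=Path("/tmp/fake_trial"),
        metrics_dataframe=None,
        best_checkpoints=None,
    )

    representation = repr(result)

    # With the format string discussed below, the repr should open with the
    # class name followed by a parenthesized, newline-separated field list.
    assert representation.startswith("Result(")
```

A ResultGrid variant of this test could follow the same pattern once the grid can be populated from in-memory results (for example through the _results attribute mentioned later in the review).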
tuner = tune.Tuner(
    f,
    run_config=air.RunConfig(
        name="test_result_grid_repr",
        local_dir=str(tmpdir / "test_result_grid_repr_results"),
    ),
    param_space={"x": tune.grid_search([1, 2])},
)
result_grid = tuner.fit()
...
representation = result_grid.__repr__()
Can we instead construct the result grid manually and assert the layout?
I agree. Manually creating a result grid will be way faster than running a full tuning experiment.
python/ray/air/result.py (Outdated)
kws_repr = " " + ",\n ".join(kws)
return "{}(\n{}\n)".format(type(self).__name__, kws_repr)
It's hard to imagine what this looks like; can we add a comment showing what we expect here?
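For illustration, the kind of layout this format string would produce; the field names and values below are made up, and the exact spacing depends on the join string:

```
Result(
  metrics={'loss': 0.1},
  log_dir=PosixPath('/tmp/my_trial'),
  checkpoint=None
)
```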
python/ray/tune/result_grid.py (Outdated)
for result in self:
    result_repr = " " + result.__repr__().replace("\n", "\n ")
    all_results_repr += f"{result_repr},\n"
- Can we use a list comprehension here?
- The replace looks a bit fishy. Can we maybe instead use a private method in Result for both Result.__repr__ and ResultGrid.__repr__? (One possible shape is sketched below.)
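One possible shape, sketched with standalone mock classes rather than the real Ray ones; the helper name _repr_indented and the exact indentation are assumptions, not something settled in this PR:

```python
class Result:
    """Standalone mock of ray.air.Result, only to illustrate the repr shape."""

    def __init__(self, metrics):
        self.metrics = metrics

    def __repr__(self) -> str:
        return f"Result(\n  metrics={self.metrics!r}\n)"

    def _repr_indented(self, indent: str = "  ") -> str:
        # Hypothetical private helper (not in the PR): the repr with every
        # line prefixed by `indent`, so callers don't repeat the
        # replace("\n", ...) logic.
        return indent + repr(self).replace("\n", "\n" + indent)


class ResultGrid:
    """Standalone mock of ray.tune.ResultGrid, only for the __repr__ shape."""

    def __init__(self, results):
        self._results = results

    def __repr__(self) -> str:
        # A list comprehension over the shared helper instead of building
        # the string with += in a loop.
        entries = [result._repr_indented() for result in self._results]
        return "ResultGrid<[\n" + ",\n".join(entries) + "\n]>"


print(ResultGrid([Result({"loss": 0.1}), Result({"loss": 0.2})]))
```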
python/ray/tune/result_grid.py (Outdated)
for result in self:
    result_repr = " " + result.__repr__().replace("\n", "\n ")
    all_results_repr += f"{result_repr},\n"
return f"[\n{all_results_repr}]"
Should we still have ResultGrid<[ ... ]> in here?
I have a few more small suggestions, but it should be good after that! The new _results is definitely better, and +1 to creating the Result objects directly in the test without running an actual experiment.
LGTM!
Nice! There's one failing test; please ping me once it's resolved and CI is passing.
Add __repr__() for ResultGrid class and prettify __repr__() of Result class.

Signed-off-by: Yunxuan Xiao <yunxuanx@Yunxuans-MBP.local.meter>
Co-authored-by: Yunxuan Xiao <yunxuanx@Yunxuans-MBP.local.meter>
Signed-off-by: Edward Oakes <ed.nmi.oakes@gmail.com>
Why are these changes needed?
Add __repr__() for the ResultGrid class and prettify __repr__() of the Result class.
The example representation:
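Roughly, based on the format strings discussed in the review above, the output has this shape (the metric names, values, and paths below are illustrative placeholders, not the PR's actual output):

```
ResultGrid<[
  Result(
    metrics={'x': 1, 'score': 0.5},
    log_dir=PosixPath('/tmp/ray_results/trial_0'),
    checkpoint=None
  ),
  Result(
    metrics={'x': 2, 'score': 0.7},
    log_dir=PosixPath('/tmp/ray_results/trial_1'),
    checkpoint=None
  )
]>
```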
Related issue number
Closes #31716
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.