Add benchmarks endpoints to Orion Web API #996
Conversation
Force-pushed from 2381e1a to cd0617c
Hi @bouthilx! I added a supplementary commit about tests: notoraptor@cd0617c. There is one remaining TODO from your code, but I don't know if it must be resolved in this PR: https://github.com/notoraptor/orion/blob/feature/benchmark_webapi_rebased/src/orion/benchmark/__init__.py#L105
    resp.body = json.dumps(response)

    def _find_latest_versions(experiments):
Looks like the functions below were copy-pasted from experiments_resource but are not used. Sorry, that was me. Could you please remove them? 😬
Done!
Thanks! I think the other functions are not used as well.
    TODO: Check coverage
Force-pushed from 11b8bad to f8e850f
Why: Building the benchmark object (setup_experiments in particular) can be quite expensive (>60s). When we only want to execute one particular analysis, the build does many useless fetches on the storage. The experiments should be built only when they are needed. Relatedly, the analysis method should allow selecting a specific study.

How: Only call setup_experiments on a study when we try to access `study.get_experiments()`. The task_ids, which seemed useless, are removed from `experiments_info` as well.
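The lazy-build pattern described in this commit message can be sketched as below. The `Study` class, the `-exp-` naming, and the list-based stand-in for the storage fetches are illustrative, not the PR's actual implementation; only the `setup_experiments`/`get_experiments` names come from the commit message.

```python
class Study:
    """Defer the expensive experiment setup until experiments are accessed."""

    def __init__(self, name):
        self.name = name
        self._experiments = None  # not built yet

    def setup_experiments(self):
        # Stand-in for the expensive storage fetches (>60s in the real benchmark).
        self._experiments = [f"{self.name}-exp-{i}" for i in range(3)]

    def get_experiments(self):
        # Build lazily: only the first access pays the setup cost;
        # later calls reuse the cached list.
        if self._experiments is None:
            self.setup_experiments()
        return self._experiments
```

With this shape, constructing many `Study` objects is cheap, and only the studies whose experiments are actually requested hit the storage.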
Why: The interface is bloated and difficult to memorise. One of the reasons we previously went this way is that we needed to instantiate an algo to access its attributes. We can now use algo_factory.get_class() to get the class object only and access the class attributes. Thanks to this we can simply set deterministic as a class attribute for the algorithms, which simplifies the interface of the benchmark. Algorithms have deterministic set to False by default since they generally have sources of variation.
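A minimal sketch of the class-attribute approach described above. `AlgoFactory`, `BaseAlgo`, and `GridSearch` are hypothetical names standing in for Orion's real registry; only the `get_class()` call and the `deterministic` attribute come from the commit message.

```python
class BaseAlgo:
    # Class attribute: algorithms generally have sources of variation,
    # so the default is non-deterministic.
    deterministic = False


class GridSearch(BaseAlgo):
    # Grid search always produces the same trials; mark it deterministic.
    deterministic = True


class AlgoFactory:
    """Hypothetical registry mirroring algo_factory.get_class()."""

    _registry = {"gridsearch": GridSearch, "base": BaseAlgo}

    @classmethod
    def get_class(cls, name):
        # Return the class itself; no instance is needed to read attributes.
        return cls._registry[name]
```

Because `deterministic` lives on the class, callers can inspect it via `AlgoFactory.get_class("gridsearch").deterministic` without paying the cost of instantiating the algorithm.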
- Add gunicorn settings from YAML to Gunicorn app
- Increase gunicorn timeout to 300 seconds
… in a with-statement, to make sure lock.acquire and lock.release are called
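The point of this commit can be illustrated with Python's standard `threading.Lock`, which supports the context-manager protocol: the with-statement calls `acquire()` on entry and guarantees `release()` on exit, even when the body raises. The `append_safely` helper and the `shared` list are illustrative, not code from the PR.

```python
import threading

lock = threading.Lock()
shared = []


def append_safely(item):
    # Entering the with-block calls lock.acquire(); leaving it, whether
    # normally or via an exception, calls lock.release().
    with lock:
        shared.append(item)


append_safely("trial-1")
```

This removes the failure mode of a manual `acquire()`/`release()` pair where an exception between the two calls leaves the lock held forever.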
Force-pushed from 0f006ce to 71dae0a
Coverage is good, thanks a lot!!
Description
Hi @bouthilx! I don't know where the best place to report this work is, so I made a PR here.
This PR adds new Orion Web API entries to get benchmarks.
The PR is based on your own branch ( https://github.com/bouthilx/orion/tree/feature/benchmark_webapi ), rebased on the develop branch and extended with a few commits to fix issues.

Changes
- /benchmarks: get available benchmarks
- /benchmarks/:name: get a given benchmark
- Replace task_num with repetitions
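The two endpoints above could be served by a Falcon-style resource like the sketch below, matching the `resp.body = json.dumps(...)` pattern visible in the diff. The `BENCHMARKS` data, the class name, and the handler split are illustrative assumptions, not the PR's actual implementation.

```python
import json

# Illustrative in-memory data; the real API reads benchmarks from storage.
BENCHMARKS = {
    "branin-demo": {"name": "branin-demo", "algorithms": ["random", "tpe"]},
}


class BenchmarksResource:
    """Falcon-style handlers for /benchmarks and /benchmarks/:name."""

    def on_get(self, req, resp):
        # GET /benchmarks -> list the names of available benchmarks.
        resp.body = json.dumps(sorted(BENCHMARKS))

    def on_get_benchmark(self, req, resp, name):
        # GET /benchmarks/:name -> one benchmark, or a 404-style error.
        if name not in BENCHMARKS:
            resp.status = "404 Not Found"
            resp.body = json.dumps({"error": f"Benchmark '{name}' not found"})
            return
        resp.body = json.dumps(BENCHMARKS[name])
```

In a real Falcon app these handlers would be wired up with `app.add_route("/benchmarks", resource)` and `app.add_route("/benchmarks/{name}", ...)`; here the class stands alone so the response shape can be exercised directly.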
Checklist
Tests
- $ tox -e py38 (replace 38 by your Python version if necessary)
Documentation
Quality
- $ tox -e lint